The coronavirus has severely affected all of our lives over the last year, whether we got sick or not, and whether our symptoms were severe or not. So when I saw a Kaggle competition whose goal was to find a better way to diagnose COVID-19 cases, the challenge interested me on both the academic and the personal level. It was also an opportunity to participate in a live competition and become part of the Kaggle community.
Here in Israel, it can be said that the Coronavirus is over. (This sentence was written three months ago, and was left here as a warning from overoptimism.) After an amazing vaccine campaign, and after over 5 million people got vaccinated in a very short period of time, the pandemic has almost completely stopped.
But all over the world, the pandemic is still spreading. As of today, there are 14M active cases worldwide and about 500K new cases daily on average. Governments everywhere are racing against the virus with their own vaccine campaigns, but the virus is still faster, so the effort to slow the spread of the disease is now of utmost importance.
As we all learned over the last year, one of the key tools in this context is early, large-scale detection of infections. The main detection method today is the PCR test, but it has its disadvantages: its cost makes it difficult to apply at a large scale, and there is a lower bound on how quickly results come back. It therefore seems worthwhile to look for additional tools for infection detection.
It's well known that COVID-19 causes shortness of breath. But we can turn this to our advantage: if the virus affects the lungs so strongly, we can try to detect the infection by examining the lungs.
In this project, we will develop a way to use chest radiographs (CXR) for COVID-19 detection. This could be a fast method for early diagnosis of COVID-19 infection and another building block in the effort to stop the virus.
What follows is the description of the competition from the Kaggle website:
Five times more deadly than the flu, COVID-19 causes significant morbidity and mortality. Like other pneumonias, pulmonary infection with COVID-19 results in inflammation and fluid in the lungs. COVID-19 looks very similar to other viral and bacterial pneumonias on chest radiographs, which makes it difficult to diagnose. Your computer vision model to detect and localize COVID-19 would help doctors provide a quick and confident diagnosis. As a result, patients could get the right treatment before the most severe effects of the virus take hold.

Currently, COVID-19 can be diagnosed via polymerase chain reaction to detect genetic material from the virus or chest radiograph. However, it can take a few hours and sometimes days before the molecular test results are back. By contrast, chest radiographs can be obtained in minutes. While guidelines exist to help radiologists differentiate COVID-19 from other types of infection, their assessments vary. In addition, non-radiologists could be supported with better localization of the disease, such as with a visual bounding box.
As the leading healthcare organization in their field, the Society for Imaging Informatics in Medicine (SIIM)'s mission is to advance medical imaging informatics through education, research, and innovation. SIIM has partnered with the Foundation for the Promotion of Health and Biomedical Research of Valencia Region (FISABIO), Medical Imaging Databank of the Valencia Region (BIMCV) and the Radiological Society of North America (RSNA) for this competition.
In this competition, you’ll identify and localize COVID-19 abnormalities on chest radiographs. In particular, you'll categorize the radiographs as negative for pneumonia or typical, indeterminate, or atypical for COVID-19. You and your model will work with imaging data and annotations from a group of radiologists.
If successful, you'll help radiologists diagnose the millions of COVID-19 patients more confidently and quickly. This will also enable doctors to see the extent of the disease and help them make decisions regarding treatment. Depending upon severity, affected patients may need hospitalization, admission into an intensive care unit, or supportive therapies like mechanical ventilation. As a result of better diagnosis, more patients will quickly receive the best care for their condition, which could mitigate the most severe effects of the virus.
This challenge, like the dataset itself, is composed of two levels. The first is the image level, which contains the chest radiographs; above it is the study level, which contains the overall conclusion drawn from all of a patient's radiographs.
At the study level, each study is classified by specialists as Negative for Pneumonia, or as Typical, Indeterminate, or Atypical Appearance for COVID-19.
The grading system is based on this paper, which proposes a new reporting language for chest radiograph (CXR) findings related to COVID-19, as described in the following table (Table 1 in the paper):
| Radiographic Classification | CXR Findings | Suggested Reporting Language |
|---|---|---|
| Typical appearance | Multifocal bilateral, peripheral opacities; opacities with rounded morphology; lower lung-predominant distribution | "Findings typical of COVID-19 pneumonia are present. However, these can overlap with other infections, drug reactions, and other causes of acute lung injury" |
| Indeterminate appearance | Absence of typical findings AND unilateral, central or upper lung predominant distribution | "Findings indeterminate for COVID-19 pneumonia and which can occur with a variety of infections and noninfectious conditions" |
| Atypical appearance | Pneumothorax or pleural effusion; pulmonary edema; lobar consolidation; solitary lung nodule or mass; diffuse tiny nodules; cavity | "Findings atypical or uncommonly reported for COVID-19 pneumonia. Consider alternative diagnoses" |
| Negative for pneumonia | No lung opacities | "No findings of pneumonia. However, chest radiographic findings can be absent early in the course of COVID-19 pneumonia" |
Although these findings refer to the CXRs themselves, in this challenge we are provided with these labels only at the study level, while each study can have multiple images. At the image level, each image has a list of bounding boxes around findings. The bounding boxes can contain findings of different types, as described by the competition hosts:
Bounding boxes were placed on lung opacities, whether typical or indeterminate. Bounding boxes were also placed on some atypical findings including solitary lobar consolidation, nodules/masses, and cavities. Bounding boxes were not placed on pleural effusions, or pneumothoraces. No bounding boxes were placed for the negative for pneumonia category.
The dataset doesn't distinguish between finding types: every finding is given the label opacity, and the predicted class for a finding in the submission should always be opacity.
The details of the study grading method, based on the findings in the images, are described in the table above. Even though the exact meaning of the terminology is definitely beyond my understanding, one thing we can learn from this table is that the classification is based on the nature of the findings as well as on their region in the lungs. This is crucial for a better understanding of what our model is supposed to learn.
!pip install numpy --upgrade 1>/dev/null 2>&1
!pip install python-gdcm 1>/dev/null 2>&1
from pathlib import Path
import sys
from ast import literal_eval
import numpy as np
import pandas as pd
import matplotlib as mlp
import matplotlib.pyplot as plt
import matplotlib.patches as patches
import seaborn as sns
print(sys.version)
mlp.rcParams['figure.figsize'] = (15, 7)
3.7.10 | packaged by conda-forge | (default, Feb 19 2021, 16:07:37) [GCC 9.3.0]
To run this notebook I switched between Kaggle kernels and Google Colab. The main advantage of Kaggle kernels is that the competition data comes built in, and the disk is relatively fast. On the other hand, Google Colab is a much more convenient environment, not to mention the free GPU, while Kaggle kernels are limited to 30 GPU hours per week. Downloading the dataset from Colab and saving it on a mounted drive didn't work - the data was too big and crashed the kernel. To be able to work on Google Colab, I downloaded the whole competition dataset to a local machine with a high upload rate and uploaded it to Google Drive. Then I could mount my drive on the Colab kernel and access the data, though performance that way was much worse than on Kaggle. The code below is used to switch between the environments.
is_colab = 'google.colab' in sys.modules
if is_colab:
    from google.colab import drive
    drive.mount('/content/drive')
    path = Path('/content/drive/MyDrive/covid19-detection/data')
else:
    path = Path('/kaggle/input/siim-covid19-detection')
The dataset is composed of three parts: the CXR files in DICOM format, and two metadata tables - one for the image level and one for the study level. Let's first explore the image-level metadata of the training set.
image_df = pd.read_csv(path/'train_image_level.csv', index_col='id')
image_df.head()
| boxes | label | StudyInstanceUID | |
|---|---|---|---|
| id | |||
| 000a312787f2_image | [{'x': 789.28836, 'y': 582.43035, 'width': 102... | opacity 1 789.28836 582.43035 1815.94498 2499.... | 5776db0cec75 |
| 000c3a3f293f_image | NaN | none 1 0 0 1 1 | ff0879eb20ed |
| 0012ff7358bc_image | [{'x': 677.42216, 'y': 197.97662, 'width': 867... | opacity 1 677.42216 197.97662 1545.21983 1197.... | 9d514ce429a7 |
| 001398f4ff4f_image | [{'x': 2729, 'y': 2181.33331, 'width': 948.000... | opacity 1 2729 2181.33331 3677.00012 2785.33331 | 28dddc8559b2 |
| 001bd15d1891_image | [{'x': 623.23328, 'y': 1050, 'width': 714, 'he... | opacity 1 623.23328 1050 1337.23328 2156 opaci... | dfd9fdd85a3e |
Before doing anything else, we'd like to change this terrible column name StudyInstanceUID to a more reasonable one.
image_df = image_df.rename(columns={'StudyInstanceUID':'study_id'})
Now it's much better.
For each image we are provided with an image id, a study id, the findings' bounding boxes, and a label for each bounding box. Let's examine the label column first. Its content corresponds to the submission's desired format: descriptions of an unlimited number of findings, separated by whitespace. Each description contains 6 fields, also separated by whitespace, as follows:
finding_label confidence xmin ymin xmax ymax
This pattern is repeated once for each of the image's findings. So if we have $k$ findings for a specific image, the label row will be:
finding_label_1 confidence_1 xmin_1 ymin_1 xmax_1 ymax_1 finding_label_2 ... finding_label_k confidence_k xmin_k ymin_k xmax_k ymax_k
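Going the other way, a prediction string in this format can be assembled from a list of findings. A minimal sketch - the helper `format_prediction` and the tuple layout are my own, not part of the competition code; the `none 1 0 0 1 1` sentinel for images without findings follows the format visible in the table above:

```python
def format_prediction(findings):
    """Build a submission-style label string from a list of
    (label, confidence, xmin, ymin, xmax, ymax) tuples."""
    parts = [f'{label} {conf} {xmin} {ymin} {xmax} {ymax}'
             for label, conf, xmin, ymin, xmax, ymax in findings]
    # images with no findings get the sentinel 'none 1 0 0 1 1'
    return ' '.join(parts) if parts else 'none 1 0 0 1 1'

print(format_prediction([('opacity', 1, 100, 200, 300, 400)]))
# opacity 1 100 200 300 400
```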
Let's extract these values.
def extract_label(row):
    values = row.label.split()
    if len(values) % 6 != 0:
        # corrupted row
        print(f'row {row.name}: wrong number of parameters in label field')
    return [dict(zip('id finding_id label confident xmin ymin xmax ymax'.split(),
                     [row.name, i] + values[6*i:6*(i+1)]))
            for i in range(len(values) // 6)]
findings = pd.DataFrame.from_dict(image_df.apply(extract_label, axis=1).sum()).set_index('id')
findings.head()
| finding_id | label | confident | xmin | ymin | xmax | ymax | |
|---|---|---|---|---|---|---|---|
| id | |||||||
| 000a312787f2_image | 0 | opacity | 1 | 789.28836 | 582.43035 | 1815.94498 | 2499.73327 |
| 000a312787f2_image | 1 | opacity | 1 | 2245.91208 | 591.20528 | 3340.5737 | 2352.75472 |
| 000c3a3f293f_image | 0 | none | 1 | 0 | 0 | 1 | 1 |
| 0012ff7358bc_image | 0 | opacity | 1 | 677.42216 | 197.97662 | 1545.21983 | 1197.75876 |
| 0012ff7358bc_image | 1 | opacity | 1 | 1792.69064 | 402.5525 | 2409.71798 | 1606.9105 |
Now we can see the domains of these values:
print(f'The unique findings label values are {findings.label.unique()}')
The unique findings label values are ['opacity' 'none']
print(f'The unique findings confidence values are {findings.confident.unique()}')
The unique findings confidence values are ['1']
The labels are only none and opacity, and the confidence in the training set is always 1 (since this is a labeled dataset). All bounding box data is provided in the boxes field, which is NaN when there are no findings (as we can see in the second row of the dataframe head printed above). So in fact, all the data we need exists in the boxes field.
Thus, we can extract the findings data directly from the boxes fields and examine some of their properties.
findings = image_df.apply(lambda x: [{'id':x.name, 'finding_id': i, **box} for i, box in enumerate(literal_eval(x.boxes))] if type(x.boxes) == str else [{'id': x.name}], axis=1)
findings = pd.DataFrame.from_dict(findings.sum()).set_index('id')
print(f'Total number of findings: {findings.shape[0]}')
findings.head()
Total number of findings: 9893
| finding_id | x | y | width | height | |
|---|---|---|---|---|---|
| id | |||||
| 000a312787f2_image | 0.0 | 789.28836 | 582.43035 | 1026.65662 | 1917.30292 |
| 000a312787f2_image | 1.0 | 2245.91208 | 591.20528 | 1094.66162 | 1761.54944 |
| 000c3a3f293f_image | NaN | NaN | NaN | NaN | NaN |
| 0012ff7358bc_image | 0.0 | 677.42216 | 197.97662 | 867.79767 | 999.78214 |
| 0012ff7358bc_image | 1.0 | 1792.69064 | 402.55250 | 617.02734 | 1204.35800 |
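The two representations should be consistent: for each box, x + width should equal the label string's xmax, and y + height its ymax. A quick sketch of that conversion, using the first box from the table above (`box_to_corners` is a hypothetical helper of mine, not from the dataset code):

```python
def box_to_corners(box):
    """Convert a {'x','y','width','height'} box to (xmin, ymin, xmax, ymax)."""
    return (box['x'], box['y'],
            box['x'] + box['width'], box['y'] + box['height'])

box = {'x': 789.28836, 'y': 582.43035, 'width': 1026.65662, 'height': 1917.30292}
corners = box_to_corners(box)
# should reproduce, up to floating-point rounding, the values seen in the
# label column: opacity 1 789.28836 582.43035 1815.94498 2499.73327
print(corners)
```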
Before further exploration of findings properties, let's explore the study-level metadata:
study_df = pd.read_csv(path/'train_study_level.csv', index_col='id')
study_df.head()
| Negative for Pneumonia | Typical Appearance | Indeterminate Appearance | Atypical Appearance | |
|---|---|---|---|---|
| id | ||||
| 00086460a852_study | 0 | 1 | 0 | 0 |
| 000c9c05fd14_study | 0 | 0 | 0 | 1 |
| 00292f8c37bd_study | 1 | 0 | 0 | 0 |
| 005057b3f880_study | 1 | 0 | 0 | 0 |
| 0051d9b12e72_study | 0 | 0 | 0 | 1 |
In this dataframe each study is classified into one of 4 classes: Negative for Pneumonia, Typical Appearance, Indeterminate Appearance, and Atypical Appearance. It is important to know how these classes are distributed over the dataset.
In the evaluation section of the competition details on Kaggle, it's said that
Studies in the test set may contain more than one label. They are as follows:
negative,typical,indeterminate,atypical
Accordingly, this is a multilabel classification task.
In contrast, in a post in the competition discussion section, the hosts indicated that
Per the grading schema, chest radiographs are classified into one of four categories, which are mutually exclusive
Since the two descriptions contradict each other, it is worth inspecting the training set to see whether multiple labels are ever assigned to the same study.
(study_df.apply(sum, axis=1) == 1).all()
True
So in the training set, each study has exactly one label attached to it, and this classification is in fact a one-hot encoding of each study into one of those 4 classes.
Since our check on the training set supports the second description, and since the labels inherently seem mutually exclusive by their meanings, we will treat this as a single-label classification task. For convenience, we will store the labels in a single column rather than in one-hot format and join it with the images dataframe.
image_df.study_id += '_study'
study_labels = study_df.idxmax(axis=1).rename('study_label')
image_df = image_df.merge(study_labels, left_on='study_id', right_index=True)
Next we will check how these classes are distributed over the dataset.
label_count = study_df.sum()
plt.figure(figsize=(8, 8))
plt.pie(label_count, labels=label_count.index, wedgeprops={'edgecolor': 'black'}, autopct='%1.f%%', textprops={'fontsize': 16}, explode=[.01]*4, shadow=True)
plt.title('Label Distribution', fontdict={'fontsize':22});
We have 47% clear Covid cases (typical appearance), 36% non-Covid (28% negative for pneumonia and 8% atypical for Covid), and 17% indeterminate cases. From a Covid vs non-Covid point of view, the dataset is quite balanced. But from the classification point of view, almost 50% of the cases come from one class and only 8% from another.
To better understand the distribution, let's see the class counts in absolute numbers:
g = sns.barplot(x=label_count.index, y=label_count)
for label, count in zip(range(label_count.index.shape[0]), label_count):
    g.text(label, count, count, color='black', ha="center", fontdict=dict(fontsize=14, weight='bold'))
Next, let's look at some properties of the findings. The number of findings varies per image. Each finding is an opacity (or another of the above-mentioned finding types) in the CXR, and for each we are provided with the bounding box of the opacity area. Let's look at the number of findings per image and the main statistical properties of their areas: sum, mean, max, etc.
findings['area'] = findings.width * findings.height
findings_props = findings.groupby('id').area.agg(['count','sum', 'min', 'max', 'mean', 'std'])
image_props = image_df.join(findings_props)
image_props.head()
| boxes | label | study_id | study_label | count | sum | min | max | mean | std | |
|---|---|---|---|---|---|---|---|---|---|---|
| id | ||||||||||
| 000a312787f2_image | [{'x': 789.28836, 'y': 582.43035, 'width': 102... | opacity 1 789.28836 582.43035 1815.94498 2499.... | 5776db0cec75_study | Typical Appearance | 2 | 3.896712e+06 | 1.928301e+06 | 1.968412e+06 | 1.948356e+06 | 28362.881484 |
| 000c3a3f293f_image | NaN | none 1 0 0 1 1 | ff0879eb20ed_study | Negative for Pneumonia | 0 | 0.000000e+00 | NaN | NaN | NaN | NaN |
| 0012ff7358bc_image | [{'x': 677.42216, 'y': 197.97662, 'width': 867... | opacity 1 677.42216 197.97662 1545.21983 1197.... | 9d514ce429a7_study | Typical Appearance | 2 | 1.610730e+06 | 7.431218e+05 | 8.676086e+05 | 8.053652e+05 | 88025.459354 |
| 001398f4ff4f_image | [{'x': 2729, 'y': 2181.33331, 'width': 948.000... | opacity 1 2729 2181.33331 3677.00012 2785.33331 | 28dddc8559b2_study | Atypical Appearance | 1 | 5.725921e+05 | 5.725921e+05 | 5.725921e+05 | 5.725921e+05 | NaN |
| 001bd15d1891_image | [{'x': 623.23328, 'y': 1050, 'width': 714, 'he... | opacity 1 623.23328 1050 1337.23328 2156 opaci... | dfd9fdd85a3e_study | Typical Appearance | 2 | 1.531871e+06 | 7.421867e+05 | 7.896840e+05 | 7.659353e+05 | 33585.683848 |
The findings count for each class label is:
plt.figure(figsize=(10,6))
plt.gca().yaxis.grid(linestyle='--')
sns.violinplot(data=image_props, x='study_label', y='count')
plt.title('Findings Count', fontdict={'fontsize':16})
plt.show()
It is now clear that all the negative studies have no findings at all, as stated in the grading table. On the other hand, each of the other three classes seems to include instances with no opacity findings, contrary to the grade descriptions in the table above. But is this really the case? We saw earlier that we have more images than studies - that is, some studies have more than one image. So it seems that in some cases the prognosis is based on findings present in only one of the scans. Let's verify this conclusion.
study_findings_props = image_props.groupby(['study_id'])
study_findings_count = study_findings_props['count'].agg('sum').to_frame().join(study_labels)
plt.figure(figsize=(10,6))
plt.gca().yaxis.grid(linestyle='--')
sns.violinplot(data=study_findings_count, x='study_label', y='count')
plt.title('Findings Count', fontdict={'fontsize':16})
plt.show()
This confirms the conclusion above. Almost all of the clear Covid-19 cases have 2 findings, and a few have 3. The indeterminate cases also have at least one finding each, and only the non-Covid cases sometimes have no findings, even when positive for pneumonia. But according to our table, no findings means Negative for Pneumonia, so we'll set these instances aside for now.
studies_to_remove = study_df[study_df.index.isin(study_findings_props['count'].sum()[study_findings_props['count'].sum() == 0].index) &
(study_df['Negative for Pneumonia'] == 0)]
study_df['removed'] = False
study_df.loc[studies_to_remove.index, 'removed'] = True
print(f'Total number of removed rows: {study_df.loc[studies_to_remove.index].shape[0]}')
print(f'\n\nRemoved rows by label:\n')
print(study_df.loc[studies_to_remove.index].iloc[:, :-1].sum().to_string())
Total number of removed rows: 84

Removed rows by label:

Negative for Pneumonia       0
Typical Appearance           1
Indeterminate Appearance     0
Atypical Appearance         83
image_props = image_props.drop(image_props.loc[image_props.study_id.isin(studies_to_remove.index)].index)
study_findings_props = image_props.groupby(['study_id'])
study_findings_count = study_findings_props['count'].agg('sum').to_frame().join(study_labels)
plt.figure(figsize=(10,6))
# plt.gca().yaxis.grid(linestyle='--')
sns.violinplot(data=study_findings_count, x='study_label', y='count')
plt.title('Findings Count', fontdict={'fontsize':16})
plt.show()
Now all the positive cases have findings. Let's inspect other findings properties:
plt.figure(figsize=(20,10))
for i, prop in enumerate(['sum', 'min', 'max', 'mean', 'std'], start=1):
    plt.subplot(2, 3, i)
    # plt.gca().yaxis.grid(linestyle='--')
    sns.violinplot(data=image_props, x='study_label', y=prop, order=study_labels.unique())
    plt.xticks(rotation=10)
    title = f'Findings {prop.title()}'
    if prop.title() != 'Sum': title += ' Area'
    plt.title(title, fontdict={'fontsize':16})
plt.tight_layout()
plt.show()
One can see that clear covid cases strongly tend to have larger findings area. The indeterminate cases also tend to have larger findings areas than the atypical ones, but this difference is much less significant. Let's inspect these features again, but now at the study level.
study_props = (image_props[['study_id']]
.join(findings)
.groupby('study_id').area.agg(['count','sum', 'min', 'max', 'mean', 'std'])
.merge(study_labels, left_on='study_id', right_index=True))
plt.figure(figsize=(20,10))
for i, prop in enumerate(['sum', 'min', 'max', 'mean', 'std'], start=1):
    plt.subplot(2, 3, i)
    # plt.gca().yaxis.grid(linestyle='--')
    sns.violinplot(data=study_props, x='study_label', y=prop, order=study_labels.unique(), palette=['blue', 'green', 'green', 'red'])
    plt.xticks(rotation=10)
    title = f'Findings {prop.title()}'
    if prop.title() != 'Sum': title += ' Area'
    plt.title(title, fontdict={'fontsize':16})
plt.tight_layout()
plt.show()
It seems that there is no significant difference between the image level and the study level. But this leads us to two important questions: how many images belong to one study on average? And when a study has more than one image, how many of the images is the prognosis based on?
image_df.groupby('study_id').label.count().unique()
array([1, 2, 3, 9, 4, 6, 5, 7])
print(image_df.groupby('study_id').label.agg(images_count='count').value_counts().to_string())
# g = sns.countplot(data=image_df.groupby('study_id').label.agg(images_count='count'), x='images_count')
images_count
1    5822
2     207
3      15
4       4
5       3
6       1
7       1
9       1
In most cases there’s one image per study. But in the cases with multiple images, what is the difference between the images? Is the prognosis based on all of the images?
It is straightforward to answer the second question from the data: we simply count the number of images labeled with findings.
multiple_images = image_df.groupby('study_id').filter((lambda x: x.label.count() > 1))
multiple_images_with_findings = multiple_images[multiple_images.boxes.notna()]
print('Images with finding in Study Counts')
print(multiple_images_with_findings.groupby('study_id').boxes.agg(count='count').value_counts().to_string())
Images with finding in Study Counts
count
1    177
So it is established that there is never more than one image labeled with findings per study. Now it will be interesting to see the differences between the images within one study. To do that, we have to turn to the third and most important part of our dataset - the DICOM files.
The data is provided in DICOM format, which is the standard for medical imaging information and related data. This format packs each medical image together with related data, such as patient ID, name, sex, etc. In our case, the data was de-identified for privacy reasons, but the metadata provided in the DICOM files may still contain important information. Let's pick a file and see what it looks like.
from pydicom import dcmread
def extract_id(full_id): return full_id[:full_id.index('_')]
def get_file_path(study_id, image_id, dataset='train'):
    study_id, image_id = extract_id(study_id), extract_id(image_id)
    return [*(path/dataset/study_id).glob(f'**/{image_id}.dcm')][0]
sample = image_df.sample(random_state=14)
fpath = get_file_path(sample.study_id.values[0], sample.index.values[0])
ds = dcmread(fpath)
# print(ds)
plt.imshow(ds.pixel_array, cmap=plt.cm.gray)
plt.colorbar()
plt.show()
Let's get a taste of our data:
# import gdcm
def show_dicoms(df, ncols=5, size=5, annotate=False):
    n = df.shape[0]
    nrows = int(np.ceil(n/ncols))
    fig = plt.figure(figsize=(size*ncols, size*nrows))
    for i, row in enumerate(df.itertuples()):
        fpath = get_file_path(row.study_id, row.Index)
        ds = dcmread(fpath)
        plt.subplot(nrows, ncols, i+1)
        plt.imshow(ds.pixel_array, cmap=plt.cm.gray)
        if annotate:
            if isinstance(row.boxes, str):
                for box in literal_eval(row.boxes):
                    rect = patches.Rectangle((box['x'], box['y']),
                                             box['width'], box['height'],
                                             color='r', fill=False)
                    plt.gca().add_patch(rect)
        plt.xticks([]), plt.yticks([])
    plt.subplots_adjust()
    return fig
show_dicoms(image_df.sample(25, random_state=25))
plt.show()
Many of the images are cropped or rotated and have different illumination levels. The lungs are contained in all of the images, but their location is not constant, the margin sizes vary, and the images may contain other body parts - neck, stomach, hands, etc. To get a better understanding of the matter at hand, it will be helpful to see CXRs from the different labels with the annotated bounding boxes drawn on the image.
gb = image_df.groupby('study_label')
for name, group in gb:
    if not name.startswith('Neg'): group = group.loc[group.boxes.notna()]
    fig = show_dicoms(group.sample(9, random_state=4), 3, 8, annotate=True)
    fig.suptitle(name, fontsize=18)
    plt.show()
The results are not clear to the non-expert eye. Although there is sometimes a kind of opacity inside the boxes, in other cases there is no clear difference between the area inside and the area outside the box. This will affect the algorithm development and verification processes, since I will not be able to rely on my own knowledge and intuition.
Now let's take a look at the metadata provided by the DICOMs. The attributes that may interest us are the body part examined, patient sex, image size, pixel spacing (the physical size of a pixel), modality (the scanning method), and image type. Let's extract these features into a pandas dataframe for later use.
from tqdm.notebook import tqdm
data = {}
def append_dcm_properties(row, props):
    try:
        fpath = get_file_path(row.study_id, row.Index)
        ds = dcmread(fpath)
        data[row.Index] = {prop: getattr(ds, prop.replace(' ', '')) for prop in props}
    except Exception as e:
        print(f'**/{row.Index[:row.Index.index("_")]}.dcm')
        raise
props = ['Image Type', 'Modality','Body Part Examined', 'Photometric Interpretation',
'Patient Sex', 'Imager Pixel Spacing', 'Rows', 'Columns']
# image_df.apply(lambda x: append_dcm_properties(x, props) , axis=1)
for row in tqdm(image_df.itertuples(), total=image_df.shape[0]):
    append_dcm_properties(row, props)
DICOM_metadata = pd.DataFrame.from_dict(data, orient='index')
DICOM_metadata.to_csv('DICOM_metadata.csv')
dm = pd.read_csv('DICOM_metadata.csv', index_col=0)
dm.head()
| Image Type | Modality | Body Part Examined | Photometric Interpretation | Patient Sex | Imager Pixel Spacing | Rows | Columns | |
|---|---|---|---|---|---|---|---|---|
| 000a312787f2_image | ['ORIGINAL', 'PRIMARY'] | DX | CHEST | MONOCHROME2 | M | [0.1, 0.1] | 3488 | 4256 |
| 000c3a3f293f_image | ['ORIGINAL', 'PRIMARY'] | CR | CHEST | MONOCHROME2 | M | [0.15, 0.15] | 2320 | 2832 |
| 0012ff7358bc_image | ['DERIVED', 'PRIMARY'] | DX | PORT CHEST | MONOCHROME2 | F | [0.139, 0.139] | 2544 | 3056 |
| 001398f4ff4f_image | ['DERIVED', 'PRIMARY', 'POST_PROCESSED', 'RT',... | CR | CHEST | MONOCHROME1 | F | [0.1, 0.1] | 3520 | 4280 |
| 001bd15d1891_image | ['ORIGINAL', 'PRIMARY'] | DX | CHEST | MONOCHROME1 | M | [0.125, 0.125] | 2800 | 3408 |
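As a side note, Imager Pixel Spacing gives the physical size of a single pixel in millimetres, so a radiograph's physical dimensions follow directly from Rows and Columns. A minimal sketch using the first row of the table above (`physical_size_cm` is my own helper; after the CSV round-trip the spacing comes back as a string, hence `literal_eval`):

```python
from ast import literal_eval

def physical_size_cm(spacing, rows, cols):
    """Return (height_cm, width_cm) from pixel spacing in mm and pixel counts."""
    dy, dx = literal_eval(spacing) if isinstance(spacing, str) else spacing
    return rows * dy / 10, cols * dx / 10

print(tuple(round(v, 2) for v in physical_size_cm('[0.1, 0.1]', 3488, 4256)))
# (34.88, 42.56) - about the size of a standard 35 x 43 cm detector plate
```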
Let's inspect the value ranges of our new data:
for column in dm.columns:
    print(f'{column}:')
    print(dm[column].unique())
    print('-'*100)
Image Type:
["['ORIGINAL', 'PRIMARY']" "['DERIVED', 'PRIMARY']"
 "['DERIVED', 'PRIMARY', 'POST_PROCESSED', 'RT', '', '', '', '', '100000']"
 "['DERIVED', 'SECONDARY', '', 'CSA RESAMPLED']" "['ORIGINAL', 'SECONDARY']"
 "['ORIGINAL', 'PRIMARY', '']" 'ORIGINAL' "['DERIVED', 'PRIMARY', '']"
 "['ORIGINAL', 'SECONDARY', 'ORIGINAL', 'PRIMARY', '']"
 "['ORIGINAL', 'PRIMARY', '', 'RAD']"
 "['DERIVED', 'PRIMARY', 'POST_PROCESSED', 'RT', '', '', '', '', '150000']"
 "['DERIVED', 'PRIMARY', 'POST_PROCESSED', '', 'RENORMALIZED', '20200405215415', '', '', '100000']"
 "['DERIVED', 'PRIMARY', 'POST_PROCESSED', '', '', '', '', '', '100000']"
 "['DERIVED', 'PRIMARY', 'POST_PROCESSED', '', 'RENORMALIZED', '20200405214833', '', '', '100000']"
 'DERIVED']
----------------------------------------------------------------------------------------------------
Modality:
['DX' 'CR']
----------------------------------------------------------------------------------------------------
Body Part Examined:
['CHEST' 'PORT CHEST' 'TORAX' nan 'T?RAX' 'Pecho' 'THORAX' 'ABDOMEN' 'SKULL' '2- TORAX' 'TÒRAX' 'PECHO']
----------------------------------------------------------------------------------------------------
Photometric Interpretation:
['MONOCHROME2' 'MONOCHROME1']
----------------------------------------------------------------------------------------------------
Patient Sex:
['M' 'F']
----------------------------------------------------------------------------------------------------
Imager Pixel Spacing:
['[0.1, 0.1]' '[0.15, 0.15]' '[0.139, 0.139]' '[0.125, 0.125]' '[0.148, 0.148]' '[0.143, 0.143]'
 '[0.138999998569489, 0.138999998569489]' '[0.14, 0.14]' '[0.308553, 0.308553]' '[0.2, 0.2]'
 '[0.175, 0.175]' '[0.0875, 0.0875]' '[0.1988, 0.1988]' '[0.2000000029802, 0.2000000029802]'
 '[0.160003, 0.160114]' '[0.187667, 0.187667]' '[0.1852, 0.1852]' '[0.198800, 0.198800]'
 '[0.194549, 0.194549]' '[0.144, 0.144]' '[0.194311, 0.194311]' '[0.194556, 0.194556]'
 '[0.1902, 0.1902]' '[0.194222, 0.194222]' '[0.168, 0.168]' '[0.1868, 0.1868]' '[0.194553, 0.194553]']
----------------------------------------------------------------------------------------------------
Rows:
[3488 2320 2544 3520 2800 2539 2330 2416 2336 3480 2991 2436 2536 1140 ... hundreds of unique values, truncated]
----------------------------------------------------------------------------------------------------
Columns:
[4256 2832 3056 4280 3408 3050 2846 2872 2836 4248 2992 3032 4240 3048 ... hundreds of unique values, truncated]
----------------------------------------------------------------------------------------------------
The first thing to inspect is the sex field. How is our data split between the sexes? How is sex related to the COVID-19 prognosis?
sns.countplot(data=dm, x='Patient Sex');
plt.figure(figsize=(15,7))
sns.countplot(data=dm.join(image_df), x='study_label', hue='Patient Sex')
We can see here the well-known fact that, statistically, women suffer less from COVID-19. Although our data is quite balanced with respect to sex, women suffer much less from pneumonia of any kind. In the typical COVID cases (clear/severe cases), women account for about two-thirds as many cases as men.
In Body Part Examined column, we have the unique values
'CHEST' 'PORT CHEST' 'TORAX' nan 'T?RAX' 'Pecho' 'THORAX' 'ABDOMEN' 'SKULL' '2- TORAX' 'TÒRAX' 'PECHO'
'CHEST', 'THORAX' (which appears in several spellings), and 'PECHO' all denote the same body part, whether you prefer English or Spanish. Let's see what else we can extract from the metadata. (Why do we have SKULLs here???)
body_parts = dm.loc[~dm['Body Part Examined'].isin(['CHEST', 'TORAX', 'THORAX', 'T?RAX',
                                                    'TÒRAX', 'PECHO', 'Pecho'])]
gb = body_parts.join(image_df).groupby('Body Part Examined')
for name, group in gb:
    fig = show_dicoms(group.sample(6, random_state=4), 3, 8)
    fig.suptitle(name, fontsize=18)
    plt.show()
The ABDOMEN images seem to include the lower part of the body too. Apart from that, there does not seem to be a significant difference between the image groups (in particular, we have no SKULLs here).
Now we can inspect the difference between images in the same study.
np.random.seed(1234)
mi_study_ids = np.random.choice(multiple_images.study_id.unique(), 10)
for study_id in mi_study_ids:
    imgs = image_df.loc[image_df.study_id == study_id]
    fig = show_dicoms(imgs, imgs.shape[0], 3)
    fig.suptitle(study_id)
It can be seen that images in the same study are sometimes identical or almost identical; sometimes the only difference is in the image post-processing (cropping, lighting, etc.); and in a few cases the study contains an additional image from another point of view.
The evaluation at the study level could be quite simple: we only have to check the prediction accuracy.
But at the image level, the predictions are bounding boxes. The predicted boxes will probably not match the labeled ones exactly, and we do not care about minor differences. So how do we decide whether our predictions are consistent with the labels?
Come to think of it, what matters here is how much the predicted area intersects with the ground truth label. For an ideal prediction, the predicted area matches the ground truth exactly: the intersection equals both the prediction's area and the ground truth's area. In a more realistic case, our prediction is a bit smaller or larger than the ground truth, or too long in one direction and too short in another. In all these cases, the larger the intersection area is relative to both the predicted area and the ground truth label area, the more we can regard the prediction as correct. This is the rationale behind the PASCAL VOC2010 IoU (Intersection over Union) evaluation method, which is used in this competition: a bounding box prediction is considered correct if the ratio of the intersection to the union of the prediction and the ground truth is greater than $0.5$, i.e., we demand
$$IoU = \frac{A_y \cap A_{\hat{y}}}{A_y \cup A_{\hat{y}}} > 0.5$$
where $A_y$ is the ground truth bounding box area and $A_{\hat{y}}$ is the area of the predicted box.
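As a minimal sketch of this check (assuming boxes given as (x_min, y_min, x_max, y_max) corners; the competition's x/y/width/height boxes would need converting first):

```python
def box_iou(box_a, box_b):
    """IoU of two axis-aligned boxes given as (x_min, y_min, x_max, y_max)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Intersection rectangle (empty if the boxes do not overlap)
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# A prediction counts as correct when box_iou(pred, truth) > 0.5
```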
But deciding whether a predicted bounding box matches a target bounding box is not enough, since there is a varying number of bounding boxes in each image. We have to find a way to evaluate how far the predicted set of bounding boxes is from the targets. Of course, we could regard each redundant bounding box as a false prediction, and then calculate the accuracy over all the predictions. But this approach has a significant drawback: the model must predict exactly the target labels, at 100% confidence. Unlike a regular classification task, where the model gives a score for each class so we can interpret not only the chosen class but also the probabilities of all other classes, here the model cannot weigh different options; it has to predict only the most confident boxes. But sometimes we want a wider view. For example, we may want to minimize false negative detections by taking into account all the bounding boxes that may contain a specific class above some confidence threshold. For this reason, VOC defines a dedicated metric called Mean Average Precision, or mAP. The main idea behind this metric is to first consider each of the classes to be detected separately, as a binary classification against all other classes, and to calculate the average precision of this binary classification. For average precision, we sort the predictions by their confidence score. Then, for a given recall $r$, we check which confidence threshold we would have to set in order to get this recall, and calculate the precision of the predictions above this threshold. The average precision is the average of these precisions over the recall range $[0, 1]$. The mean average precision is then calculated as the mean of the average precisions of all classes.
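The average-precision idea can be sketched as follows (a simplified illustration; here `is_true_positive` is assumed to come from the IoU > 0.5 matching against the ground truth, and the interpolation follows the VOC2010 "all points" style, which may differ in details from the official evaluation code):

```python
import numpy as np

def average_precision(confidences, is_true_positive, n_ground_truth):
    """Sort detections by confidence, sweep the threshold, and average
    precision over the recall range [0, 1] (VOC2010-style, all points)."""
    order = np.argsort(-np.asarray(confidences, dtype=float))
    tp = np.asarray(is_true_positive, dtype=float)[order]
    cum_tp = np.cumsum(tp)
    precision = cum_tp / (np.arange(len(tp)) + 1)
    recall = cum_tp / n_ground_truth
    # Make the precision envelope monotonically decreasing in recall
    for i in range(len(precision) - 2, -1, -1):
        precision[i] = max(precision[i], precision[i + 1])
    # Integrate precision over recall steps
    ap, prev_r = 0.0, 0.0
    for p, r in zip(precision, recall):
        ap += p * (r - prev_r)
        prev_r = r
    return ap
```

mAP is then just the mean of this quantity over the classes.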
In order to use the same evaluation method at the study level and at the image level, the competition hosts decided to use this metric for the classification task as well. This means that instead of predicting one class for each study, we have to give a score to each of the classes; in other words, the predictions for the different classes are independent of each other.
In this project I chose fastai as the main library. Fastai is a framework built on top of PyTorch that provides an easy and fast way to build ANN models.
As a baseline model we will take the simplest ResNet, trained as recommended by fastai as a base standard: using the largest batch size feasible. Fastai also recommends using their LR finder to determine the learning rate, but my experiments on this dataset show that with Adam (fastai's default optimizer) and the one-cycle policy, anything in the range $1e-3$ to $1e-2$ works well, and this is more or less the value given by the LR finder. Thus, I set the One Cycle max_lr parameter to $1e-3$ for all the following experiments.
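To make the one-cycle policy concrete, here is a rough numpy sketch of the schedule: a cosine warm-up from a low learning rate to max_lr, then a cosine anneal down to almost zero. The parameter names `pct_start`, `div`, and `div_final` mirror fastai's, but the exact defaults and curve shapes are assumptions, not fastai's implementation:

```python
import numpy as np

def one_cycle_lr(lr_max, n_steps, pct_start=0.25, div=25.0, div_final=1e5):
    """Sketch of a one-cycle schedule: cosine warm-up from lr_max/div up to
    lr_max over the first pct_start of training, then cosine annealing down
    to lr_max/div_final."""
    def cos_anneal(start, end, pct):
        # pct = 0 -> start, pct = 1 -> end, following a half cosine
        return end + (start - end) * (1 + np.cos(np.pi * pct)) / 2
    warm = int(n_steps * pct_start)
    lrs = []
    for i in range(n_steps):
        if i < warm:
            lrs.append(cos_anneal(lr_max / div, lr_max, i / max(warm, 1)))
        else:
            lrs.append(cos_anneal(lr_max, lr_max / div_final,
                                  (i - warm) / max(n_steps - warm, 1)))
    return np.array(lrs)
```

With max_lr = 1e-3 the schedule spends most of its time near the top of the $1e-3$ to $1e-2$ range's lower end, which matches the observation that the exact choice within that range matters little.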
For the baseline we want to keep the model and the training and prediction process fast and simple in terms of computational resources, so we will resize the images to 256$\times$256.
At first, let's install the latest version of fastai.
%pip install fastai --upgrade 1>/dev/null
For training a model, it's much more convenient and efficient to convert the DICOM files to JPEG. This also allows us to work on Colab kernels instead of Kaggle kernels, which gives free GPU usage and a more convenient environment. Some Kagglers have already done this and created several datasets with JPEGs at different resolutions. For the training process we will use the datasets created by a Kaggler called Awsaf, who created three datasets at resolutions of 256, 512, and 1024.
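As a sketch of what such a conversion involves (we don't run it here since we use the pre-converted datasets; `dicom_to_jpeg` is a hypothetical helper), note the MONOCHROME1 handling, since we saw both photometric interpretations in the metadata:

```python
import numpy as np

def to_uint8(pixels, photometric='MONOCHROME2'):
    """Scale raw DICOM pixel data to 8-bit; invert MONOCHROME1 images (where
    low pixel values are displayed bright) to one common convention."""
    arr = pixels.astype(np.float32)
    arr -= arr.min()
    if arr.max() > 0:
        arr /= arr.max()
    if photometric == 'MONOCHROME1':
        arr = 1.0 - arr
    return (arr * 255).astype(np.uint8)

def dicom_to_jpeg(dcm_path, jpg_path, size=256):
    """Hypothetical conversion helper; pydicom/Pillow imported lazily."""
    import pydicom
    from PIL import Image
    dcm = pydicom.dcmread(dcm_path)
    img = to_uint8(dcm.pixel_array,
                   getattr(dcm, 'PhotometricInterpretation', 'MONOCHROME2'))
    Image.fromarray(img).resize((size, size)).save(jpg_path)
```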
Here I attach the Colab to my Google Drive in order to copy my Kaggle credentials, and then download those datasets:
from google.colab import drive
drive.mount('/content/gdrive', force_remount=True)
!pip uninstall kaggle -y
!pip install kaggle
!mkdir -p /root/.kaggle/
!cp /content/gdrive/MyDrive/kaggle/kaggle.json /root/.kaggle/
for size in [256, 512, 1024]:
    !kaggle datasets download -d awsaf49/siimcovid19-$size-jpg-image-dataset
    !unzip -o siimcovid19-$size-jpg-image-dataset.zip -d jpeg-$size/ 1>/dev/null
Mounted at /content/gdrive
Downloading siimcovid19-256-jpg-image-dataset.zip to /content
92% 121M/132M [00:01<00:00, 117MB/s]
100% 132M/132M [00:01<00:00, 122MB/s]
Downloading siimcovid19-512-jpg-image-dataset.zip to /content
95% 417M/439M [00:03<00:00, 141MB/s]
100% 439M/439M [00:03<00:00, 138MB/s]
Downloading siimcovid19-1024-jpg-image-dataset.zip to /content
99% 1.46G/1.48G [00:11<00:00, 136MB/s]
100% 1.48G/1.48G [00:11<00:00, 139MB/s]
Besides the imports, we have here the random_seed function I took from Kaggle, which sets the random seed for Python's random functions, numpy, torch, and CUDA. Using this function with a constant seed is very useful for the reproducibility of the experiments.
import pandas as pd
import numpy as np
from matplotlib import pyplot as plt
import matplotlib.patches as patches
from pathlib import Path
from joblib import Parallel, delayed
from ast import literal_eval
import PIL
from tqdm.auto import tqdm
import warnings
warnings.filterwarnings("ignore", category=UserWarning)
def random_seed(seed_value):
    import random
    random.seed(seed_value)                        # Python
    import numpy as np
    np.random.seed(seed_value)                     # cpu vars
    import torch
    torch.manual_seed(seed_value)                  # cpu vars
    if torch.cuda.is_available():
        torch.cuda.manual_seed(seed_value)
        torch.cuda.manual_seed_all(seed_value)     # gpu vars
        torch.backends.cudnn.deterministic = True  # needed
        torch.backends.cudnn.benchmark = False
From now on we will use the train.csv file that comes with the JPEG dataset, which maps each image id to its study id, its boxes, and the class columns of its study from the study-level file. We will merge the DICOM_meta dataframe we created in the EDA into this table, so that all the necessary data is in one table.
seed = 850
path = Path('/content/jpeg-256/')
train_df = pd.read_csv(path/'train.csv')
train_df['study_label'] = train_df[['Negative for Pneumonia', 'Typical Appearance', 'Indeterminate Appearance', 'Atypical Appearance']].idxmax(1)
train_df['image_fn'] = train_df['image_id'] + '.jpg'
dm = pd.read_csv('/content/gdrive/MyDrive/covid19-detection/data/DICOM_metadata.csv', index_col=0)
dm.index = dm.index.str.extract('([^_]*)_')
dm.index = dm.index.map(lambda x : x[0])
train_df = train_df.merge(dm, left_on='image_id', right_index=True)
train_df.head()
| boxes | label | StudyInstanceUID | image_id | Negative for Pneumonia | Typical Appearance | Indeterminate Appearance | Atypical Appearance | filepath | study_label | image_fn | Image Type | Modality | Body Part Examined | Photometric Interpretation | Patient Sex | Imager Pixel Spacing | Rows | Columns | |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 0 | [{'x': 789.28836, 'y': 582.43035, 'width': 102... | opacity 1 789.28836 582.43035 1815.94498 2499.... | 5776db0cec75 | 000a312787f2 | 0 | 1 | 0 | 0 | /kaggle/input/siim-covid19-detection/train/577... | Typical Appearance | 000a312787f2.jpg | ['ORIGINAL', 'PRIMARY'] | DX | CHEST | MONOCHROME2 | M | [0.1, 0.1] | 3488 | 4256 |
| 1 | NaN | none 1 0 0 1 1 | ff0879eb20ed | 000c3a3f293f | 1 | 0 | 0 | 0 | /kaggle/input/siim-covid19-detection/train/ff0... | Negative for Pneumonia | 000c3a3f293f.jpg | ['ORIGINAL', 'PRIMARY'] | CR | CHEST | MONOCHROME2 | M | [0.15, 0.15] | 2320 | 2832 |
| 2 | [{'x': 677.42216, 'y': 197.97662, 'width': 867... | opacity 1 677.42216 197.97662 1545.21983 1197.... | 9d514ce429a7 | 0012ff7358bc | 0 | 1 | 0 | 0 | /kaggle/input/siim-covid19-detection/train/9d5... | Typical Appearance | 0012ff7358bc.jpg | ['DERIVED', 'PRIMARY'] | DX | PORT CHEST | MONOCHROME2 | F | [0.139, 0.139] | 2544 | 3056 |
| 3 | [{'x': 2729, 'y': 2181.33331, 'width': 948.000... | opacity 1 2729 2181.33331 3677.00012 2785.33331 | 28dddc8559b2 | 001398f4ff4f | 0 | 0 | 0 | 1 | /kaggle/input/siim-covid19-detection/train/28d... | Atypical Appearance | 001398f4ff4f.jpg | ['DERIVED', 'PRIMARY', 'POST_PROCESSED', 'RT',... | CR | CHEST | MONOCHROME1 | F | [0.1, 0.1] | 3520 | 4280 |
| 4 | [{'x': 623.23328, 'y': 1050, 'width': 714, 'he... | opacity 1 623.23328 1050 1337.23328 2156 opaci... | dfd9fdd85a3e | 001bd15d1891 | 0 | 1 | 0 | 0 | /kaggle/input/siim-covid19-detection/train/dfd... | Typical Appearance | 001bd15d1891.jpg | ['ORIGINAL', 'PRIMARY'] | DX | CHEST | MONOCHROME1 | M | [0.125, 0.125] | 2800 | 3408 |
First we have to split our data into training and validation sets. Fastai provides its own splitter in its dataloaders creator, but that splitter is not stratified, and stratification is important, especially with such imbalanced data, so we will use the sklearn splitter. Another reason to use sklearn is that in some cases we have several images in a single study. These images are highly correlated and sometimes almost identical, so to prevent data leakage we want to split the data by study id, so that all the images of the same study end up in the same split.
from sklearn.model_selection import train_test_split
train_rows, val_rows = train_test_split(train_df.StudyInstanceUID, train_size=.8, stratify=train_df.study_label, random_state=seed)
train_df['valid'] = train_df.StudyInstanceUID.isin(val_rows)
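As a quick sanity check of the split (a hypothetical helper, assuming the `StudyInstanceUID` and `valid` columns built above), we can verify that no study contributes images to both splits:

```python
import pandas as pd

def split_is_leak_free(df, group_col='StudyInstanceUID', flag_col='valid'):
    """True iff every group's rows all carry the same split flag, i.e. no
    study has images in both the training and validation sets."""
    return not df.groupby(group_col)[flag_col].nunique().gt(1).any()
```

With the dataframe above, `split_is_leak_free(train_df)` should return True, since the `valid` flag is computed per study id.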
The first thing to do is to create dataloaders to load the training and validation data into the model. For now, the only transform we want to apply to the data is resizing the images to 256$\times$256. To save time and resources, we will use pre-resized images instead of inserting the resizing into the dataloader pipeline. Since we want to use ImageNet-pretrained weights, we also need to normalize the data to the ImageNet stats, but fastai takes care of this automatically.
from fastai.vision.all import *
dls = ImageDataLoaders.from_df(train_df, path/'train', fn_col='image_fn', label_col='study_label', bs=128, seed=seed,
valid_col='valid')
dls.show_batch()
Once we have the dataloaders, we can create and train the baseline model.
metrics = [accuracy]
learn = cnn_learner(dls, resnet18, metrics=metrics)
Downloading: " " to /root/.cache/torch/hub/checkpoints/resnet18-f37072fd.pth
We will run the LR finder here to show that it indeed finds a value in the range $(1e-3, 1e-2)$, as described above.
random_seed(seed)
learn.lr_find()
SuggestedLRs(valley=0.001737800776027143)
lr = 1e-3
random_seed(seed)
learn.fit_one_cycle(10, lr)
| epoch | train_loss | valid_loss | accuracy | time |
|---|---|---|---|---|
| 0 | 2.136939 | 1.627712 | 0.363703 | 00:13 |
| 1 | 1.839005 | 1.277387 | 0.551065 | 00:13 |
| 2 | 1.568370 | 1.193293 | 0.554004 | 00:13 |
| 3 | 1.328974 | 1.166894 | 0.556943 | 00:13 |
| 4 | 1.146452 | 1.170350 | 0.570904 | 00:13 |
| 5 | 1.003107 | 1.156454 | 0.575312 | 00:13 |
| 6 | 0.913112 | 1.152748 | 0.578251 | 00:13 |
| 7 | 0.817919 | 1.158333 | 0.581190 | 00:13 |
| 8 | 0.776223 | 1.159912 | 0.585599 | 00:13 |
| 9 | 0.751702 | 1.157146 | 0.587803 | 00:13 |
learn.recorder.plot_loss(skip_start=False)
The model achieves about 0.6 accuracy in the first few epochs, and then stops improving. Looking at the train loss against the validation loss in the training log and in the plot above, we can see that the model starts to overfit around the 4th-5th epoch.
To prevent the overfitting we will apply data augmentation transforms. Fastai provides the aug_transforms method, which bundles a handful of image transformations such as rotations, flips, zooming in and out, and others. From the images printed above, we can see that some of the images are rotated by up to about 15 degrees, so we will set the maximum rotation to 15 degrees. The other defaults of aug_transforms seem to be appropriate for our data.
Here is the new dataloader, configured with the augmentation transforms:
dls = ImageDataLoaders.from_df(train_df, path/'train', fn_col='image_fn', label_col='study_label', bs=128, seed=seed, valid_col='valid',
batch_tfms=aug_transforms(max_rotate=15))
dls.show_batch()
/usr/local/lib/python3.7/dist-packages/torch/_tensor.py:1023: UserWarning: torch.solve is deprecated in favor of torch.linalg.solveand will be removed in a future PyTorch release. torch.linalg.solve has its arguments reversed and does not return the LU factorization. To get the LU factorization see torch.lu, which can be used with torch.lu_solve or torch.lu_unpack. X = torch.solve(B, A).solution should be replaced with X = torch.linalg.solve(A, B) (Triggered internally at /pytorch/aten/src/ATen/native/BatchLinearAlgebra.cpp:760.) ret = func(*args, **kwargs)
We will now use these dataloaders to retrain the model:
learn = cnn_learner(dls, resnet18, metrics=metrics)
random_seed(seed)
learn.fit_one_cycle(10, lr)
| epoch | train_loss | valid_loss | accuracy | time |
|---|---|---|---|---|
| 0 | 2.172095 | 1.634977 | 0.389420 | 00:14 |
| 1 | 1.880736 | 1.537730 | 0.493755 | 00:14 |
| 2 | 1.684923 | 1.188444 | 0.566495 | 00:14 |
| 3 | 1.498345 | 1.184727 | 0.573843 | 00:14 |
| 4 | 1.359698 | 1.114470 | 0.578251 | 00:14 |
| 5 | 1.257719 | 1.068544 | 0.606172 | 00:14 |
| 6 | 1.181332 | 1.074996 | 0.596620 | 00:14 |
| 7 | 1.130962 | 1.051476 | 0.600294 | 00:14 |
| 8 | 1.100617 | 1.057825 | 0.598824 | 00:14 |
| 9 | 1.084826 | 1.054806 | 0.602498 | 00:14 |
The augmentations prevented the overfitting and the result improved. But in the last epoch the training loss got very close to the validation loss, so overfitting is still only one or two epochs away. Below, the model is trained for 3 more epochs so we can see it overfit.
learn.fit_one_cycle(3)
| epoch | train_loss | valid_loss | accuracy | time |
|---|---|---|---|---|
| 0 | 1.074882 | 1.093471 | 0.598090 | 00:14 |
| 1 | 1.059054 | 1.037730 | 0.610580 | 00:14 |
| 2 | 1.027459 | 1.033620 | 0.603233 | 00:14 |
We need stronger augmentations in order to be able to train for more epochs:
dls = ImageDataLoaders.from_df(train_df, path/'train', fn_col='image_fn', label_col='study_label', bs=128, seed=seed, valid_col='valid',
batch_tfms=aug_transforms(
mult=1.3,
max_rotate=25,
min_zoom=.9,
max_zoom=1.3,
max_lighting=.4,
max_warp=.3
))
dls.show_batch()
Now the transforms are much stronger. Let's try them.
learn = cnn_learner(dls, resnet18, metrics=metrics)
random_seed(seed)
learn.fit_one_cycle(10, lr)
| epoch | train_loss | valid_loss | accuracy | time |
|---|---|---|---|---|
| 0 | 2.279873 | 1.854448 | 0.337252 | 00:15 |
| 1 | 1.963356 | 1.389864 | 0.507715 | 00:15 |
| 2 | 1.760644 | 1.269570 | 0.557678 | 00:15 |
| 3 | 1.584370 | 1.156144 | 0.598090 | 00:15 |
| 4 | 1.448000 | 1.114058 | 0.598090 | 00:15 |
| 5 | 1.346394 | 1.093533 | 0.586334 | 00:15 |
| 6 | 1.275903 | 1.077571 | 0.594416 | 00:15 |
| 7 | 1.215048 | 1.052689 | 0.598090 | 00:15 |
| 8 | 1.188088 | 1.052382 | 0.600294 | 00:15 |
| 9 | 1.160785 | 1.050861 | 0.601029 | 00:15 |
Much better. Now we can continue the training.
learn.fit_one_cycle(10, lr)
| epoch | train_loss | valid_loss | accuracy | time |
|---|---|---|---|---|
| 0 | 1.153651 | 1.029901 | 0.600294 | 00:15 |
| 1 | 1.134104 | 1.048217 | 0.601029 | 00:15 |
| 2 | 1.123916 | 1.013362 | 0.610580 | 00:15 |
| 3 | 1.102736 | 1.002486 | 0.604702 | 00:15 |
| 4 | 1.067288 | 0.990958 | 0.606172 | 00:15 |
| 5 | 1.040623 | 1.012678 | 0.602498 | 00:15 |
| 6 | 1.027421 | 0.977302 | 0.623071 | 00:15 |
| 7 | 1.012243 | 0.981503 | 0.618663 | 00:15 |
| 8 | 1.004729 | 0.983442 | 0.617193 | 00:15 |
| 9 | 0.994905 | 0.986293 | 0.618663 | 00:15 |
We achieved a stable result of ~0.62 accuracy in the last epochs.
Let's now try to add a different type of augmentation - mixup. In mixup augmentation we blend two images from the batch in some random proportion. Of course, we need to mix their labels as well - if we took 0.75 of a negative image and 0.25 of a typical image, we need to label the mixed image as 0.75 negative and 0.25 typical. For technical reasons, in fastai this method is implemented as a callback on the Learner object.
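The core of mixup can be sketched in plain numpy (an illustrative, hypothetical helper - fastai's MixUp callback does the equivalent internally, mixing the loss rather than explicit one-hot labels):

```python
import numpy as np

def mixup_batch(images, onehot_labels, alpha=0.4, rng=None):
    """Blend each image with a shuffled partner; mix the labels the same way."""
    if rng is None:
        rng = np.random.default_rng(0)
    lam = rng.beta(alpha, alpha, size=len(images))
    lam = np.maximum(lam, 1 - lam)        # keep each image dominant in its mix
    perm = rng.permutation(len(images))
    lam_img = lam.reshape(-1, 1, 1)       # broadcast over height and width
    mixed_x = lam_img * images + (1 - lam_img) * images[perm]
    lam_lbl = lam.reshape(-1, 1)
    mixed_y = lam_lbl * onehot_labels + (1 - lam_lbl) * onehot_labels[perm]
    return mixed_x, mixed_y
```

If the coefficient for some image comes out 0.75 and its partner carries a different label, the mixed label ends up 0.75/0.25, exactly as described above.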
We will also add a SaveModelCallback to save the most accurate model during this training.
learn = cnn_learner(dls, resnet18, metrics=metrics, cbs=[MixUp(), SaveModelCallback('accuracy', fname='resnet18', reset_on_fit=False)],
path='/content/gdrive/MyDrive/covid19-detection/project-models/')
random_seed(seed)
learn.fit_one_cycle(10, lr)
| epoch | train_loss | valid_loss | accuracy | time |
|---|---|---|---|---|
| 0 | 2.221637 | 1.648123 | 0.347539 | 00:14 |
| 1 | 2.033317 | 1.310811 | 0.484203 | 00:14 |
| 2 | 1.797587 | 1.217643 | 0.560617 | 00:14 |
| 3 | 1.631773 | 1.133407 | 0.574578 | 00:14 |
| 4 | 1.518408 | 1.117549 | 0.579721 | 00:14 |
| 5 | 1.419377 | 1.069009 | 0.592212 | 00:14 |
| 6 | 1.337978 | 1.061508 | 0.585599 | 00:14 |
| 7 | 1.286495 | 1.047750 | 0.610580 | 00:14 |
| 8 | 1.254207 | 1.040178 | 0.610580 | 00:14 |
| 9 | 1.244091 | 1.039126 | 0.608376 | 00:14 |
Better model found at epoch 0 with accuracy value: 0.347538560628891.
Better model found at epoch 1 with accuracy value: 0.4842028021812439.
Better model found at epoch 2 with accuracy value: 0.560617208480835.
Better model found at epoch 3 with accuracy value: 0.5745775103569031.
Better model found at epoch 4 with accuracy value: 0.5797207951545715.
Better model found at epoch 5 with accuracy value: 0.5922116041183472.
Better model found at epoch 7 with accuracy value: 0.6105804443359375.
learn.fit_one_cycle(10, lr)
| epoch | train_loss | valid_loss | accuracy | time |
|---|---|---|---|---|
| 0 | 1.225691 | 1.043782 | 0.606907 | 00:14 |
| 1 | 1.224334 | 1.050711 | 0.610580 | 00:14 |
| 2 | 1.214372 | 1.060318 | 0.579721 | 00:14 |
| 3 | 1.181578 | 1.040388 | 0.600294 | 00:14 |
| 4 | 1.152836 | 1.014057 | 0.625276 | 00:14 |
| 5 | 1.132918 | 1.014307 | 0.621602 | 00:14 |
| 6 | 1.121463 | 0.999287 | 0.617928 | 00:14 |
| 7 | 1.111305 | 0.991048 | 0.623806 | 00:14 |
| 8 | 1.103899 | 0.987156 | 0.627480 | 00:14 |
| 9 | 1.096164 | 0.988859 | 0.623071 | 00:14 |
Better model found at epoch 4 with accuracy value: 0.6252755522727966.
Better model found at epoch 8 with accuracy value: 0.6274797916412354.
We are still far enough from overfitting, so we can continue.
learn.fit_one_cycle(20, lr)
Better model found at epoch 12 with accuracy value: 0.6362968683242798.
| epoch | train_loss | valid_loss | accuracy | time |
|---|---|---|---|---|
| 0 | 1.091953 | 0.991541 | 0.620132 | 00:14 |
| 1 | 1.098029 | 0.999595 | 0.619398 | 00:14 |
| 2 | 1.082786 | 0.994576 | 0.621602 | 00:14 |
| 3 | 1.092197 | 0.994829 | 0.623806 | 00:14 |
| 4 | 1.088647 | 1.035866 | 0.598824 | 00:14 |
| 5 | 1.087988 | 1.007794 | 0.617193 | 00:14 |
| 6 | 1.078389 | 0.978019 | 0.618663 | 00:14 |
| 7 | 1.077155 | 0.976289 | 0.623071 | 00:14 |
| 8 | 1.073659 | 0.987944 | 0.620132 | 00:14 |
| 9 | 1.064534 | 0.990782 | 0.621602 | 00:14 |
| 10 | 1.057874 | 0.966611 | 0.626745 | 00:14 |
| 11 | 1.051780 | 0.981629 | 0.623071 | 00:14 |
| 12 | 1.051828 | 0.967710 | 0.636297 | 00:14 |
| 13 | 1.051160 | 0.978473 | 0.614989 | 00:14 |
| 14 | 1.047323 | 0.969387 | 0.627480 | 00:14 |
| 15 | 1.043161 | 0.972942 | 0.623806 | 00:14 |
| 16 | 1.033725 | 0.966448 | 0.628949 | 00:14 |
| 17 | 1.038975 | 0.969765 | 0.626745 | 00:14 |
| 18 | 1.036290 | 0.968932 | 0.628949 | 00:14 |
| 19 | 1.039456 | 0.971199 | 0.626010 | 00:14 |
It seems we have gotten the best we can out of this model.
Now that we have a model, we can export it and make a submission to the Kaggle competition. The score of the model will be lower than its accuracy, mainly because in the competition we also have to predict opacity bounding boxes at the image level, which we haven't developed yet. But the score can still be used to compare models.
Let's load the best model and export it.
learn.load('resnet18')
learn.export('resnet18.pkl')
/usr/local/lib/python3.7/dist-packages/fastai/learner.py:56: UserWarning: Saved filed doesn't contain an optimizer state.
elif with_opt: warn("Saved filed doesn't contain an optimizer state.")
The submission of this model scored 0.322 on the public leaderboard.
This score makes sense: about 50% of the score comes from the image level, so a score of 0.32 is consistent with the model's 0.63 accuracy.
Let's now try deeper versions of ResNet. The next ResNet version in the PyTorch model zoo is resnet34. Using this architecture forces us to reduce the batch size, because a batch size of 128 would overflow the Colab GPU memory with this architecture.
dls34 = ImageDataLoaders.from_df(train_df, path/'train', fn_col='image_fn', label_col='study_label', bs=64, seed=seed, valid_col='valid',
batch_tfms=aug_transforms(
mult=1.3,
max_rotate=25,
min_zoom=.9,
max_zoom=1.3,
max_lighting=.4,
max_warp=.3
))
learn34 = cnn_learner(dls34, resnet34, metrics=metrics, cbs=[MixUp()])
random_seed(seed)
learn34.fit_one_cycle(10, lr)
learn34.fit_one_cycle(10, lr)
learn34.fit_one_cycle(10, lr)
Downloading: " " to /root/.cache/torch/hub/checkpoints/resnet34-b627a593.pth
| epoch | train_loss | valid_loss | accuracy | time |
|---|---|---|---|---|
| 0 | 2.234765 | 1.504851 | 0.443791 | 00:25 |
| 1 | 1.919579 | 1.372733 | 0.498898 | 00:25 |
| 2 | 1.623830 | 1.188850 | 0.560617 | 00:25 |
| 3 | 1.420471 | 1.089587 | 0.582660 | 00:25 |
| 4 | 1.277785 | 1.084268 | 0.586334 | 00:25 |
| 5 | 1.227922 | 1.085220 | 0.578986 | 00:25 |
| 6 | 1.163226 | 1.076400 | 0.598090 | 00:25 |
| 7 | 1.144382 | 1.058939 | 0.601763 | 00:25 |
| 8 | 1.124791 | 1.058929 | 0.592946 | 00:25 |
| 9 | 1.120825 | 1.044490 | 0.597355 | 00:25 |
| epoch | train_loss | valid_loss | accuracy | time |
|---|---|---|---|---|
| 0 | 1.132079 | 1.062200 | 0.590007 | 00:25 |
| 1 | 1.119061 | 1.082659 | 0.592212 | 00:25 |
| 2 | 1.131198 | 1.113467 | 0.565760 | 00:25 |
| 3 | 1.122201 | 1.122275 | 0.565760 | 00:25 |
| 4 | 1.094373 | 1.031081 | 0.603968 | 00:25 |
| 5 | 1.078759 | 1.060769 | 0.595885 | 00:25 |
| 6 | 1.076960 | 1.026844 | 0.603233 | 00:25 |
| 7 | 1.065849 | 1.019088 | 0.601763 | 00:25 |
| 8 | 1.058897 | 1.031228 | 0.597355 | 00:25 |
| 9 | 1.059524 | 1.028008 | 0.599559 | 00:25 |
| epoch | train_loss | valid_loss | accuracy | time |
|---|---|---|---|---|
| 0 | 1.060321 | 1.028187 | 0.595151 | 00:25 |
| 1 | 1.063604 | 1.044466 | 0.596620 | 00:25 |
| 2 | 1.055629 | 1.075485 | 0.587803 | 00:25 |
| 3 | 1.069100 | 1.013106 | 0.603968 | 00:25 |
| 4 | 1.056794 | 1.013758 | 0.597355 | 00:25 |
| 5 | 1.057729 | 1.017473 | 0.604702 | 00:25 |
| 6 | 1.047425 | 0.990069 | 0.611315 | 00:25 |
| 7 | 1.043572 | 1.028464 | 0.600294 | 00:25 |
| 8 | 1.031379 | 1.028275 | 0.598824 | 00:25 |
| 9 | 1.031109 | 1.035032 | 0.597355 | 00:25 |
The deeper model did worse than the simpler one, maybe because of the smaller batch size. I tried to train resnet versions deeper than resnet34 with different learning rates, batch sizes, and numbers of epochs, but all the experiments gave even worse results, and I cannot find a reason.
Although not shown here, experiments with other architectures like densenet and efficientnet gave results similar to resnet18 and resnet34. On Kaggle I found a training notebook for EfficientNet-B7 on TPU with an accuracy of about 0.65, but when I tried to replicate the training on Google Colab, I found that on the Colab TPU I had to halve the batch size, which again dropped the accuracy to about 0.62-0.63.
We started with a 256 image size, but maybe the model can give more accurate results with a larger one. When increasing the resolution we again have to reduce the batch size to fit the GPU memory capacity, so we will set the batch size to 64 for the 512 image size and to 16 for the 1024 image size.
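These batch sizes track the per-image pixel count: a 512x512 image has 4 times the pixels of a 256x256 one, and a 1024x1024 image 16 times, so per-image activation memory grows at roughly the same rate. A quick computation of the ratios (actual memory use also depends on the architecture, so this is only a rough guide):

```python
def pixel_ratio(new_size, base_size=256):
    """How many times more pixels per image than at the base resolution."""
    return (new_size / base_size) ** 2

print(pixel_ratio(512))   # 4.0  -> per-image memory roughly quadruples
print(pixel_ratio(1024))  # 16.0
```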
dls512 = ImageDataLoaders.from_df(train_df, '/content/jpeg-512/train', fn_col='image_fn', label_col='study_label', bs=64, seed=seed, valid_col='valid',
batch_tfms=aug_transforms(
mult=1.3,
max_rotate=25,
min_zoom=.9,
max_zoom=1.3,
max_lighting=.4,
max_warp=.4
))
learn512 = cnn_learner(dls512, resnet18, metrics=[accuracy], cbs=[MixUp()])
random_seed(seed)
learn512.fit_one_cycle(10, lr)
| epoch | train_loss | valid_loss | accuracy | time |
|---|---|---|---|---|
| 0 | 2.072881 | 1.505724 | 0.419544 | 00:50 |
| 1 | 1.788971 | 1.194337 | 0.570904 | 00:50 |
| 2 | 1.502423 | 1.119591 | 0.584129 | 00:50 |
| 3 | 1.307556 | 1.115712 | 0.565026 | 00:50 |
| 4 | 1.171192 | 1.050063 | 0.597355 | 00:50 |
| 5 | 1.095508 | 1.047778 | 0.601029 | 00:50 |
| 6 | 1.049296 | 1.032640 | 0.598824 | 00:50 |
| 7 | 1.018584 | 1.024428 | 0.602498 | 00:50 |
| 8 | 0.995111 | 1.019536 | 0.603968 | 00:50 |
| 9 | 0.983949 | 1.020884 | 0.606172 | 00:50 |
learn512.fit_one_cycle(10, lr)
| epoch | train_loss | valid_loss | accuracy | time |
|---|---|---|---|---|
| 0 | 0.990704 | 1.017819 | 0.607641 | 00:50 |
| 1 | 0.992012 | 1.045582 | 0.588538 | 00:50 |
| 2 | 1.014551 | 1.000568 | 0.619398 | 00:50 |
| 3 | 1.000638 | 1.019743 | 0.614254 | 00:50 |
| 4 | 0.973878 | 1.034835 | 0.600294 | 00:50 |
| 5 | 0.952693 | 1.013431 | 0.602498 | 00:50 |
| 6 | 0.915152 | 1.006195 | 0.605437 | 00:50 |
| 7 | 0.901263 | 1.011025 | 0.615724 | 00:50 |
| 8 | 0.875014 | 1.005544 | 0.620132 | 00:50 |
| 9 | 0.871610 | 1.006065 | 0.615724 | 00:50 |
dls_1024 = ImageDataLoaders.from_df(train_df, '/content/jpeg-1024/train', fn_col='image_fn', label_col='study_label', bs=16, seed=seed, valid_col='valid',
batch_tfms=aug_transforms(
mult=1.3,
max_rotate=25,
min_zoom=.9,
max_zoom=1.3,
max_lighting=.4,
max_warp=.4
))
random_seed(seed)
learn_1024 = cnn_learner(dls_1024, resnet18, metrics=[accuracy], cbs=[MixUp()])
learn_1024.fit_one_cycle(10, lr)
| epoch | train_loss | valid_loss | accuracy | time |
|---|---|---|---|---|
| 0 | 1.900838 | 1.355854 | 0.487877 | 03:22 |
| 1 | 1.506670 | 1.193257 | 0.560617 | 03:22 |
| 2 | 1.269001 | 1.109861 | 0.563556 | 03:22 |
| 3 | 1.140289 | 1.043220 | 0.595885 | 03:22 |
| 4 | 1.107860 | 1.054133 | 0.599559 | 03:22 |
| 5 | 1.049287 | 1.002251 | 0.622337 | 03:22 |
| 6 | 1.045195 | 0.980898 | 0.628949 | 03:22 |
| 7 | 1.034131 | 0.971386 | 0.630419 | 03:22 |
| 8 | 0.981983 | 0.976748 | 0.631888 | 03:22 |
| 9 | 0.961620 | 0.976762 | 0.631888 | 03:22 |
Increasing the image resolution doesn't improve the model accuracy significantly, so we will stay with resnet18 and an image size of 256.
Let's now examine the model results.
The accuracy is about 0.6. But we have 4 classes, so to better understand the model's performance, let's look at the confusion matrix.
learn.load('resnet18')
intrep = ClassificationInterpretation.from_learner(learn)
intrep.plot_confusion_matrix(True, figsize=(7,7))
intrep.print_classification_report()
precision recall f1-score support
Atypical Appearance 0.80 0.04 0.08 100
Indeterminate Appearance 0.40 0.05 0.08 254
Negative for Pneumonia 0.56 0.87 0.68 357
Typical Appearance 0.70 0.83 0.76 650
accuracy 0.64 1361
macro avg 0.61 0.45 0.40 1361
weighted avg 0.61 0.64 0.56 1361
The model's accuracy on the Typical and Negative classes is quite good, but on Indeterminate and Atypical it is very poor. The recall is very low, meaning that the model in fact cannot identify atypical and indeterminate cases at all.
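The near-zero F1 scores follow directly from the low recall: F1 is the harmonic mean of precision and recall, so even a precision of 0.80 cannot compensate for a recall of 0.04. Checking the Atypical row of the report above:

```python
def f1(precision, recall):
    """F1 is the harmonic mean of precision and recall."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Atypical Appearance: precision 0.80, recall 0.04
print(round(f1(0.80, 0.04), 2))  # 0.08 - matches the report
```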
Let's look at the results:
learn.show_results()
These results don't mean much to me, since I cannot read these CXRs. Plotting the opacity bounding boxes on the X-rays may help to understand the results.
results_items = train_df[train_df.valid]#dls.valid.items.sample(30)
res_dl = dls.test_dl(results_items)
_,_,preds = learn.get_preds(dl=res_dl, with_decoded=True)
fig, ctxs = plt.subplots(30, 3, figsize=(9,90))
fig.suptitle('Results (predictions:targets)', size=18)
for ax, pred, (index, item) in zip(ctxs.flatten(), preds, results_items.iterrows()):
ax.imshow(PIL.Image.open(path/'train'/item.image_fn))
ax.set_axis_off()
pred_class = dls.vocab[pred]
color = 'b' if pred_class == item.study_label else 'r'
ax.set_title(f'{pred_class}\n{item.study_label}', fontdict=dict(color=color))
if isinstance(item.boxes, str):
for box in literal_eval(item.boxes):
w_scale = 256/item.Columns
h_scale = 256/item.Rows
rect = patches.Rectangle((box['x']*w_scale, box['y']*h_scale),
box['width']*w_scale, box['height']*h_scale,
color=color, linewidth=1, fill=False)
ax.add_patch(rect)
plt.tight_layout(rect=[0, 0.03, 1, 0.95])
Generally speaking, we can see that when the opacity areas are small or asymmetric, the model tends to misclassify the image.
To better understand the model's decisions, I tried the Grad-CAM method: we compute the gradient of the loss w.r.t. the last convolutional layer's activations and upsample it to the original image size, obtaining a heatmap that shows the importance of each part of the image to the model's decision. Here are the Grad-CAM heatmaps for 3 samples from each study class, with the finding bounding boxes plotted too. The predicted classes are shown above the images, colored red for wrong classifications.
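Stripped of the hook machinery, the core Grad-CAM computation is just two lines: global-average-pool the gradients to get one weight per channel, then take the weighted sum of the activation maps. A minimal numpy sketch of that step (shapes are illustrative; no ReLU is applied, matching the notebook code):

```python
import numpy as np

def grad_cam_map(activations, gradients):
    """activations, gradients: arrays of shape (channels, H, W)."""
    weights = gradients.mean(axis=(1, 2), keepdims=True)  # (C, 1, 1)
    return (weights * activations).sum(axis=0)            # (H, W) heatmap

acts = np.ones((3, 2, 2))
grads = np.ones((3, 2, 2))
print(grad_cam_map(acts, grads))  # each pixel sums 3 unit-weighted channels
```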
# Forward hook: stores the activations of module `m` on each forward pass.
class Hook():
def __init__(self, m):
self.hook = m.register_forward_hook(self.hook_func)
def hook_func(self, m, i, o): self.stored = o.detach().clone()
def __enter__(self, *args): return self
def __exit__(self, *args): self.hook.remove()
# Backward hook: stores the gradients of the loss w.r.t. the output of `m`.
class HookBwd():
def __init__(self, m):
self.hook = m.register_backward_hook(self.hook_func)
def hook_func(self, m, gi, go): self.stored = go[0].detach().clone()
def __enter__(self, *args): return self
def __exit__(self, *args): self.hook.remove()
for cls in ['Atypical Appearance', 'Indeterminate Appearance', 'Negative for Pneumonia', 'Typical Appearance']:
samples = train_df.loc[train_df.valid & (train_df.study_label==cls)].sample(3, random_state=123)
figure ,axes = plt.subplots(1, 6, figsize=(30,5))
axes = axes.reshape((3,2))
figure.suptitle(cls)
for (idx, item), (org_ax, ax) in zip(samples.iterrows(), axes):
img = PILImage.create(path/'train'/item.image_fn)
org_ax.imshow(img)
org_ax.set_title('Original Image')
org_ax.set_axis_off()
x, = first(dls.test_dl([img]))
cls_idx = dls.vocab.o2i[cls]
learn.model.to('cuda')
with HookBwd(learn.model[0]) as hookg:
with Hook(learn.model[0]) as hook:
output = learn.model.eval()(x.cuda())
act = hook.stored
predicted_class = dls.vocab[output.detach().cpu().numpy().argmax()]
output[0,cls_idx].backward()
grad = hookg.stored
w = grad[0].mean(dim=[1,2], keepdim=True)
cam_map = (w * act[0]).sum(0)
x_dec = TensorImage(dls.train.decode((x,))[0][0])
x_dec.show(ctx=ax)
color = 'b' if cls == predicted_class else 'r'
ax.set_title('Predicted As: '+ predicted_class, color=color)
ax.imshow(cam_map.detach().cpu(), alpha=0.6, extent=(0,224,224,0),
interpolation='bilinear', cmap='magma');
if isinstance(item.boxes, str):
for box in literal_eval(item.boxes):
w_scale = ax.get_xlim()[1]/item.Columns
h_scale = ax.get_ylim()[0]/item.Rows
rect = patches.Rectangle((box['x']*w_scale, box['y']*h_scale),
box['width']*w_scale, box['height']*h_scale,
color='r', linewidth=1, fill=False)
ax.add_patch(rect)
plt.show()
The results are very confusing. The parts of the image that are most important to the model are not related to the finding boxes, and are sometimes outside the lungs, or even outside the body entirely. Maybe we can do better if we can turn the model's attention to the lungs. We will return to this later.
Given the small amount of data we have for the Atypical and Indeterminate classes, the model's poor results on these labels are not a big surprise. We can use oversampling to balance the data before training.
def balance_df(df, target_col, valid_col=None):
df = df.copy()
if valid_col is not None:
df_val = df.loc[df[valid_col]]
df = df.loc[~df[valid_col]]
target = df[target_col]
max_count = target.value_counts().max()
for cls in target.unique():
cls_df = df.loc[df[target_col] == cls]
while (df[target_col] == cls).sum() < max_count:
df = df.append(cls_df.sample(min(max_count - (df[target_col] == cls).sum(), cls_df.shape[0])))
if valid_col is not None:
df = df.append(df_val)
return df
btrain_df = balance_df(train_df, 'study_label', 'valid')
print('Samples counts on the training set:\n')
print(btrain_df[~btrain_df.valid].study_label.value_counts())
print('\n\n\nSamples counts on the validation set:\n')
print(btrain_df.loc[btrain_df.valid].study_label.value_counts())
Samples counts on the training set:

Indeterminate Appearance    2357
Negative for Pneumonia      2357
Atypical Appearance         2357
Typical Appearance          2357
Name: study_label, dtype: int64

Samples counts on the validation set:

Typical Appearance          650
Negative for Pneumonia      357
Indeterminate Appearance    254
Atypical Appearance         100
Name: study_label, dtype: int64
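For reference, the same oversampling idea can be written more compactly with a pandas groupby, sampling each class up to the majority count. Note this samples with replacement, a slightly different scheme than `balance_df` above (which duplicates whole classes and then tops up); shown on a toy frame:

```python
import pandas as pd

def oversample(df, target_col, random_state=0):
    """Resample every class (with replacement) up to the majority-class count."""
    n_max = df[target_col].value_counts().max()
    return (df.groupby(target_col, group_keys=False)
              .apply(lambda g: g.sample(n_max, replace=True,
                                        random_state=random_state)))

toy = pd.DataFrame({'label': ['a'] * 6 + ['b'] * 2})
counts = oversample(toy, 'label')['label'].value_counts()
print(counts['a'], counts['b'])  # 6 6
```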
dlsb = ImageDataLoaders.from_df(btrain_df, path/'train', fn_col='image_fn', label_col='study_label', bs=128, seed=seed, valid_col='valid',
batch_tfms=aug_transforms(
mult=1.3,
max_rotate=25,
min_zoom=.9,
max_zoom=1.3,
max_lighting=.4,
max_warp=.3
))
learnb = cnn_learner(dlsb, resnet18, metrics=metrics, cbs=[MixUp(), SaveModelCallback(monitor='accuracy', fname='resnet18_balanced', reset_on_fit=True)],
path='/content/gdrive/MyDrive/covid19-detection/project-models/balanced')
random_seed(seed)
learnb.fit_one_cycle(10, lr)
learnb.fit_one_cycle(10, lr)
| epoch | train_loss | valid_loss | accuracy | time |
|---|---|---|---|---|
| 0 | 2.219090 | 1.335968 | 0.482733 | 00:25 |
| 1 | 1.970201 | 1.304144 | 0.464364 | 00:25 |
| 2 | 1.714511 | 1.233654 | 0.471712 | 00:25 |
| 3 | 1.516757 | 1.151954 | 0.504041 | 00:25 |
| 4 | 1.408087 | 1.168786 | 0.506245 | 00:25 |
| 5 | 1.339128 | 1.118309 | 0.554004 | 00:25 |
| 6 | 1.309822 | 1.145051 | 0.523145 | 00:25 |
| 7 | 1.288998 | 1.144312 | 0.524614 | 00:25 |
| 8 | 1.283112 | 1.134314 | 0.538575 | 00:25 |
| 9 | 1.279155 | 1.127064 | 0.542983 | 00:25 |
Better model found at epoch 0 with accuracy value: 0.48273327946662903. Better model found at epoch 3 with accuracy value: 0.5040411353111267. Better model found at epoch 4 with accuracy value: 0.5062454342842102. Better model found at epoch 5 with accuracy value: 0.554004430770874.
| epoch | train_loss | valid_loss | accuracy | time |
|---|---|---|---|---|
| 0 | 1.306818 | 1.141441 | 0.530492 | 00:25 |
| 1 | 1.301914 | 1.100895 | 0.565026 | 00:25 |
| 2 | 1.287177 | 1.182743 | 0.505511 | 00:25 |
| 3 | 1.279284 | 1.103523 | 0.546657 | 00:25 |
| 4 | 1.258687 | 1.143587 | 0.510654 | 00:25 |
| 5 | 1.250123 | 1.109319 | 0.521675 | 00:25 |
| 6 | 1.236958 | 1.103620 | 0.536370 | 00:25 |
| 7 | 1.227438 | 1.102438 | 0.542248 | 00:25 |
| 8 | 1.218653 | 1.108317 | 0.531962 | 00:25 |
| 9 | 1.217123 | 1.118696 | 0.531227 | 00:25 |
Better model found at epoch 0 with accuracy value: 0.5304923057556152. Better model found at epoch 1 with accuracy value: 0.5650256872177124.
The accuracy is much lower compared to the former model. Let's look at the confusion matrix:
learnb.load('/content/gdrive/MyDrive/covid19-detection/project-models/balanced/models/resnet18_balanced')
<fastai.learner.Learner at 0x7f4672b06750>
intrepb = ClassificationInterpretation.from_learner(learnb)
intrepb.plot_confusion_matrix(True, figsize=(7, 7))
intrepb.print_classification_report()
precision recall f1-score support
Atypical Appearance 0.17 0.14 0.15 100
Indeterminate Appearance 0.21 0.14 0.17 254
Negative for Pneumonia 0.54 0.85 0.66 357
Typical Appearance 0.75 0.64 0.69 650
accuracy 0.57 1361
macro avg 0.42 0.44 0.42 1361
weighted avg 0.55 0.57 0.55 1361
Now the model is much more accurate on Atypical, but it comes at the expense of the Typical class. It seems that the decision between typical and atypical is too hard - when the dataset didn't contain many atypicals, the model could ignore them and be accurate on the typicals. But now that the dataset contains equal numbers of typicals and atypicals, the model's accuracy drops.
Despite the balanced model's lower overall accuracy, we can still use it in the competition for scoring the Atypical and Indeterminate classes. Since the evaluation method calculates the score for each class independently, we will use, for each class, the best model for that class.
learnb.load('resnet18_balanced')
learnb.export('resnet18_balanced.pkl')
/usr/local/lib/python3.7/dist-packages/fastai/learner.py:56: UserWarning: Saved filed doesn't contain an optimizer state.
elif with_opt: warn("Saved filed doesn't contain an optimizer state.")
Using this model to predict the Atypical and Indeterminate classes gives a score of 0.316 on the public LB. But if this model has better accuracy on the Atypical and Indeterminate classes, how did using it on these classes give worse results? The answer is that the first model was better at estimating the probability that an image is atypical; it just generally assigned every image a higher probability of being typical. But when sorting the images by their predicted atypical probability, there are still more real atypicals at the top of the list for the regular model than for the balanced one.
The reason the regular model is better than the balanced one must be related to the oversampling in some way. Maybe the balanced model overfitted on the duplicated atypical and indeterminate images.
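The key point here is that average precision depends only on the ranking of the predictions, not on their absolute values: uniformly lowering every probability leaves the score unchanged. A simplified AP implementation (ignoring ties; toy labels and scores) makes this concrete:

```python
def average_precision(y_true, scores):
    """Mean of precision-at-k over the positions of the true positives."""
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    tp, ap_sum = 0, 0.0
    for rank, i in enumerate(order, start=1):
        if y_true[i]:
            tp += 1
            ap_sum += tp / rank
    return ap_sum / sum(y_true)

y = [1, 0, 1, 0]
scores = [0.9, 0.8, 0.7, 0.1]
halved = [s * 0.5 for s in scores]  # worse calibration, same ranking
print(average_precision(y, scores) == average_precision(y, halved))  # True
```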
When examining the model results, we saw that the model had a hard time recognizing small or asymmetric findings. How can we help it? We already know that these findings are located only in the lung regions of the image. If we could annotate the lungs on the images and tell the model where they are, instead of letting it search the whole image for findings, the model would pay more attention to the important part of the image, which may improve the results.
On Kaggle, we can find this dataset, which contains CXRs and lung masks. We can use it to train a lung detector and create annotations for our dataset.
Let's download the dataset from kaggle:
!kaggle datasets download -d nikhilpandey360/chest-xray-masks-and-labels
!unzip -o chest-xray-masks-and-labels.zip -d cxmal-ds/ 1>/dev/null
Downloading chest-xray-masks-and-labels.zip to /content 100% 9.56G/9.58G [01:54<00:00, 95.4MB/s] 100% 9.58G/9.58G [01:55<00:00, 88.9MB/s]
Some masks are missing, so we have to filter out all the CXRs for which the dataset does not contain mask labels. Besides that, we will give the masks the same names as the CXR files (the original names have multiple formats):
CXRs_path = Path('/content/cxmal-ds/Lung Segmentation/CXR_png/')
mask_path = Path('/content/cxmal-ds/Lung Segmentation/masks/')
CXRs_files = list(CXRs_path.glob('*.png'))
mask_files = list(mask_path.glob('*.png'))
import shutil
for fn in tqdm(mask_files):
if fn.stem.endswith('_mask'):
shutil.move(fn, fn.parent/(fn.stem[:-len('_mask')] + '.png'))
mask_files = list(mask_path.glob('*.png'))
mask_stems = [fn.stem for fn in mask_files]
CXRs_files = [fn for fn in CXRs_files if fn.stem in mask_stems]
In the original dataset, the lung mask value is 255; i.e., the mask files contain 255 in the lung regions and 0 elsewhere. Fastai expects the segmentation label for a one-class segmentation task to be 1, so we have to fix the masks.
from joblib import Parallel, delayed
def format_mask(fn):
mask = PIL.Image.open(fn)
mask = np.array(mask)
mask = mask != 0
PIL.Image.fromarray(mask.astype(np.uint8), mode='L').save(fn)
Parallel(n_jobs=-1)(delayed(format_mask)(fn) for fn in tqdm(mask_files));
Let's show the data:
i = random.randrange(len(CXRs_files))
plt.figure(figsize=(20,5))
cxr = plt.imread(CXRs_files[i])
mask = plt.imread(mask_files[i])
plt.subplot(1 , 3, 1)
plt.title('CXR')
plt.imshow(cxr)
plt.xticks([])
plt.yticks([])
plt.subplot(1 , 3, 2)
plt.title('Mask')
plt.imshow(mask)
plt.xticks([])
plt.yticks([])
plt.subplot(1 , 3, 3)
plt.title('Overlay')
plt.xticks([])
plt.yticks([])
plt.imshow(np.array([cxr, cxr, mask*255]).transpose((1,2,0)))
plt.show()
Using this dataset we will train a lungs detector:
def label_func(f): return mask_path/f.name
dls_lungs = SegmentationDataLoaders.from_label_func('/', CXRs_files,
label_func,
batch_size=32,
seed=seed,
codes=['background', 'lung'],
item_tfms=Resize(256, pad_mode=PadMode.Zeros),
batch_tfms=aug_transforms(
mult=1.3,
max_rotate=25,
min_zoom=.9,
max_zoom=1.3,
max_lighting=.4,
max_warp=.4
) )
dls_lungs.show_batch()
lung_detector = unet_learner(dls_lungs, resnet18, metrics=[foreground_acc, Dice], cbs=[SaveModelCallback('dice')],
path='/content/gdrive/MyDrive/covid19-detection/project-models/lungs-detector/resnet18')
random_seed(seed)
lung_detector.fit_one_cycle(10, lr)
| epoch | train_loss | valid_loss | foreground_acc | dice | time |
|---|---|---|---|---|---|
| 0 | 0.303297 | 0.164016 | 0.888558 | 0.913564 | 01:02 |
| 1 | 0.188742 | 0.090731 | 0.952972 | 0.947277 | 01:02 |
| 2 | 0.132844 | 0.065343 | 0.948718 | 0.956697 | 01:02 |
| 3 | 0.103328 | 0.058421 | 0.955897 | 0.960045 | 01:01 |
| 4 | 0.084565 | 0.057908 | 0.943713 | 0.960400 | 01:01 |
| 5 | 0.071709 | 0.056635 | 0.948708 | 0.961429 | 01:01 |
| 6 | 0.062960 | 0.057941 | 0.957891 | 0.960615 | 01:03 |
| 7 | 0.055589 | 0.060662 | 0.952709 | 0.962185 | 01:00 |
| 8 | 0.049958 | 0.060498 | 0.956056 | 0.961841 | 01:02 |
| 9 | 0.046046 | 0.061221 | 0.953470 | 0.962214 | 01:02 |
Better model found at epoch 0 with dice value: 0.9135644814803429. Better model found at epoch 1 with dice value: 0.9472769527630406. Better model found at epoch 2 with dice value: 0.9566971125950945. Better model found at epoch 3 with dice value: 0.9600451633801982. Better model found at epoch 4 with dice value: 0.9603999159352814. Better model found at epoch 5 with dice value: 0.9614294932710088. Better model found at epoch 7 with dice value: 0.9621849991371552. Better model found at epoch 9 with dice value: 0.9622136592373838.
lung_detector = unet_learner(dls_lungs, resnet18, metrics=[foreground_acc, Dice],
path='/content/gdrive/MyDrive/covid19-detection/project-models/lungs-detector/resnet18')
lung_detector.load('model')
<fastai.learner.Learner at 0x7f456c24d190>
lung_detector.export('resnet18_lung_detector')
Let's examine the model result on the main dataset:
def create_overlay(img, mask):
img = PIL.Image.open(img)
img = np.array(img)
mask = mask.numpy()
img = np.stack([img, img, mask*img.max()]).transpose((1,2,0))
return img
tdl = lung_detector.dls.test_dl(get_image_files('/content/jpeg-256/train'))
items = tdl.items[:tdl.bs]
b = tdl.one_batch()
imgs, preds, _, masks = lung_detector.get_preds(dl=[b], with_input=True, with_decoded=True)
show_images([ create_overlay(fn, m) for fn ,m in zip(items[:6], masks[:6])], 3, 2)
Excellent. With this model we will create a new dataset of lung-annotated CXRs.
ann_cxr_path = Path('/content/ann_cxr')
ann_cxr_path.mkdir(exist_ok=True)
def save_annotated_image(fn, mask):
img = np.array(PIL.Image.open(fn))
mask *= 255
img = np.stack([img, img, mask]).transpose([1,2,0]).astype(np.uint8)
PIL.Image.fromarray(img).save(ann_cxr_path/fn.name)
files = get_image_files('/content/jpeg-256/train')
tdl = lung_detector.dls.test_dl(files)
for i, b in enumerate(tqdm(tdl)):
preds, _, masks = lung_detector.get_preds(dl=[b], with_decoded=True)
Parallel(n_jobs=-1)(delayed(save_annotated_image)(fn, mask) for fn, mask in zip(files[i*tdl.bs:(i+1)*tdl.bs], masks))
!zip -r lung_ann.zip /content/ann_cxr 1>/dev/null
!cp lung_ann.zip /content/gdrive/MyDrive/covid19-detection/
Now let's train a model using the lung annotated dataset:
dls = ImageDataLoaders.from_df(train_df, ann_cxr_path, fn_col='image_fn', label_col='study_label', bs=128, seed=seed, valid_col='valid',
batch_tfms=aug_transforms(
mult=1.3,
max_rotate=25,
min_zoom=.9,
max_zoom=1.3,
max_lighting=.4,
max_warp=.3
))
dls.show_batch()
learn = cnn_learner(dls, resnet18, metrics=metrics, cbs=[MixUp(), SaveModelCallback('accuracy', fname='resnet18_lungs_ann', reset_on_fit=False)],
path='/content/gdrive/MyDrive/covid19-detection/project-models/')
random_seed(seed)
learn.fit_one_cycle(10, lr)
| epoch | train_loss | valid_loss | accuracy | time |
|---|---|---|---|---|
| 0 | 2.267499 | 1.435779 | 0.401176 | 00:15 |
| 1 | 2.001173 | 1.295355 | 0.487142 | 00:15 |
| 2 | 1.805714 | 1.129401 | 0.562087 | 00:15 |
| 3 | 1.619403 | 1.079148 | 0.587068 | 00:15 |
| 4 | 1.480217 | 1.062512 | 0.593681 | 00:15 |
| 5 | 1.383800 | 1.018543 | 0.609111 | 00:15 |
| 6 | 1.321221 | 1.020916 | 0.603233 | 00:15 |
| 7 | 1.280113 | 1.014218 | 0.603968 | 00:15 |
| 8 | 1.242893 | 1.012466 | 0.601029 | 00:15 |
| 9 | 1.222161 | 1.012224 | 0.599559 | 00:15 |
Better model found at epoch 0 with accuracy value: 0.4011756181716919. Better model found at epoch 1 with accuracy value: 0.48714181780815125. Better model found at epoch 2 with accuracy value: 0.5620867013931274. Better model found at epoch 3 with accuracy value: 0.5870683193206787. Better model found at epoch 4 with accuracy value: 0.5936810970306396. Better model found at epoch 5 with accuracy value: 0.609110951423645.
learn.fit_one_cycle(10, lr)
| epoch | train_loss | valid_loss | accuracy | time |
|---|---|---|---|---|
| 0 | 1.263959 | 1.010589 | 0.606907 | 00:15 |
| 1 | 1.249599 | 1.040149 | 0.591477 | 00:15 |
| 2 | 1.223314 | 0.994722 | 0.608376 | 00:15 |
| 3 | 1.198254 | 0.990495 | 0.616458 | 00:15 |
| 4 | 1.172536 | 0.996548 | 0.617193 | 00:15 |
| 5 | 1.137991 | 1.001128 | 0.617193 | 00:15 |
| 6 | 1.123813 | 0.997503 | 0.613519 | 00:15 |
| 7 | 1.117974 | 0.982794 | 0.621602 | 00:15 |
| 8 | 1.107139 | 0.980644 | 0.621602 | 00:15 |
| 9 | 1.109801 | 0.986673 | 0.622337 | 00:15 |
Better model found at epoch 3 with accuracy value: 0.6164584755897522. Better model found at epoch 4 with accuracy value: 0.6171932220458984. Better model found at epoch 7 with accuracy value: 0.6216017603874207. Better model found at epoch 9 with accuracy value: 0.6223365068435669.
learn.fit_one_cycle(20, lr)
| epoch | train_loss | valid_loss | accuracy | time |
|---|---|---|---|---|
| 0 | 1.111008 | 0.977138 | 0.621602 | 00:15 |
| 1 | 1.103754 | 0.972803 | 0.627480 | 00:15 |
| 2 | 1.103430 | 0.973667 | 0.621602 | 00:15 |
| 3 | 1.099101 | 0.994444 | 0.620132 | 00:15 |
| 4 | 1.090209 | 0.988612 | 0.619398 | 00:15 |
| 5 | 1.088827 | 0.996453 | 0.613519 | 00:15 |
| 6 | 1.085216 | 0.960095 | 0.630419 | 00:15 |
| 7 | 1.078139 | 0.950651 | 0.628949 | 00:15 |
| 8 | 1.072358 | 0.962799 | 0.626745 | 00:15 |
| 9 | 1.068268 | 0.953659 | 0.628215 | 00:15 |
| 10 | 1.070310 | 0.963855 | 0.623806 | 00:15 |
| 11 | 1.062110 | 0.947065 | 0.634093 | 00:15 |
| 12 | 1.055256 | 0.949435 | 0.645114 | 00:15 |
| 13 | 1.046854 | 0.943637 | 0.632623 | 00:15 |
| 14 | 1.043869 | 0.942601 | 0.647318 | 00:15 |
| 15 | 1.038722 | 0.941413 | 0.645114 | 00:15 |
| 16 | 1.032956 | 0.941611 | 0.633358 | 00:15 |
| 17 | 1.036669 | 0.944473 | 0.634093 | 00:15 |
| 18 | 1.038098 | 0.942114 | 0.633358 | 00:15 |
| 19 | 1.036430 | 0.941454 | 0.635562 | 00:15 |
Better model found at epoch 1 with accuracy value: 0.6274797916412354. Better model found at epoch 6 with accuracy value: 0.6304188370704651. Better model found at epoch 11 with accuracy value: 0.6340925693511963. Better model found at epoch 12 with accuracy value: 0.6451138854026794. Better model found at epoch 14 with accuracy value: 0.6473181247711182.
The results are slightly better, but the lung annotations didn't improve the model accuracy much. I guess the reason is that what distinguishes the classes is the existence and type of findings in the lungs, so during training the model could learn to recognize the lung regions by itself while learning the differences between the classes. The noise in this case is relatively small, so the model does not gain much information from the lung annotations.
Anyway, let's export the last model.
learn.load('resnet18_lungs_ann')
learn.export('resnet18_lungs_ann.pkl')
On the Kaggle public leaderboard, segmenting the lungs with the lung detector and then applying the last model achieves results identical to the regular model - 0.322.
Let's use Grad-CAM again to see if this model pays more attention to the findings.
for cls in ['Atypical Appearance', 'Indeterminate Appearance', 'Negative for Pneumonia', 'Typical Appearance']:
samples = train_df.loc[train_df.valid & (train_df.study_label==cls)].sample(3, random_state=123)
figure ,axes = plt.subplots(1, 6, figsize=(30,5))
axes = axes.reshape((3,2))
figure.suptitle(cls)
for (idx, item), (org_ax, ax) in zip(samples.iterrows(), axes):
img = PILImage.create(ann_cxr_path/item.image_fn)
org_ax.imshow(img)
org_ax.set_title('Original Image')
org_ax.set_axis_off()
img = PILImage.create(path/'train'/item.image_fn)
x, = first(dls.test_dl([img]))
cls_idx = dls.vocab.o2i[cls]
learn.model.to('cuda')
with HookBwd(learn.model[0]) as hookg:
with Hook(learn.model[0]) as hook:
output = learn.model.eval()(x.cuda())
act = hook.stored
predicted_class = dls.vocab[output.detach().cpu().numpy().argmax()]
output[0,cls_idx].backward()
grad = hookg.stored
w = grad[0].mean(dim=[1,2], keepdim=True)
cam_map = (w * act[0]).sum(0)
x_dec = TensorImage(dls.train.decode((x,))[0][0])
x_dec.show(ctx=ax)
color = 'b' if cls == predicted_class else 'r'
ax.set_title('Predicted As: '+ predicted_class, color=color)
ax.imshow(cam_map.detach().cpu(), alpha=0.6, extent=(0,224,224,0),
interpolation='bilinear', cmap='magma');
if isinstance(item.boxes, str):
for box in literal_eval(item.boxes):
w_scale = ax.get_xlim()[1]/item.Columns
h_scale = ax.get_ylim()[0]/item.Rows
rect = patches.Rectangle((box['x']*w_scale, box['y']*h_scale),
box['width']*w_scale, box['height']*h_scale,
color='r', linewidth=1, fill=False)
ax.add_patch(rect)
plt.show()
The model is still looking outside the lung area - sometimes at the whole image except the lungs. I cannot explain these results.
We saw earlier that the evaluation method calculates the average precision score for each class independently. This led me to try training a separate model for each class. By training binary models, we reduce the problem for each class to a simpler one, in which we may get better accuracy on each class independently.
To evaluate the binary models we will add precision to the metrics, since our main interest here is to improve the precision of the binary model w.r.t. the selected class.
Let's try to train a binary model for the most confused class - atypical.
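The `label_col='Atypical Appearance'` below assumes `train_df` already carries one boolean indicator column per class. If it doesn't, such columns can be derived from `study_label` in one line each (a sketch on a toy frame; the column names mirror the study labels):

```python
import pandas as pd

classes = ['Atypical Appearance', 'Indeterminate Appearance',
           'Negative for Pneumonia', 'Typical Appearance']

toy = pd.DataFrame({'study_label': ['Typical Appearance',
                                    'Atypical Appearance']})
for cls in classes:
    toy[cls] = toy.study_label == cls  # boolean indicator column

print(toy['Atypical Appearance'].tolist())  # [False, True]
```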
atypical_dls = ImageDataLoaders.from_df(train_df, path/'train', fn_col='image_fn', label_col='Atypical Appearance', bs=128, seed=seed, valid_col='valid',
batch_tfms=aug_transforms(
mult=1.3,
max_rotate=25,
min_zoom=.9,
max_zoom=1.3,
max_lighting=.4,
max_warp=.3
))
learn = cnn_learner(atypical_dls, resnet18, metrics=[accuracy, Precision()], cbs=[MixUp(), SaveModelCallback('accuracy', fname='resnet18_atypical', reset_on_fit=False)],
path='/content/gdrive/MyDrive/covid19-detection/project-models/')
random_seed(seed)
learn.fit_one_cycle(10, lr)
| epoch | train_loss | valid_loss | accuracy | precision_score | time |
|---|---|---|---|---|---|
| 0 | 1.268427 | 1.512867 | 0.255694 | 0.080037 | 00:15 |
| 1 | 1.051898 | 0.716580 | 0.584864 | 0.091388 | 00:15 |
| 2 | 0.819488 | 0.417166 | 0.846436 | 0.083969 | 00:15 |
| 3 | 0.638893 | 0.329314 | 0.900073 | 0.050000 | 00:15 |
| 4 | 0.510268 | 0.309125 | 0.916973 | 0.000000 | 00:15 |
| 5 | 0.433980 | 0.283711 | 0.926525 | 0.000000 | 00:15 |
| 6 | 0.395441 | 0.273009 | 0.926525 | 0.000000 | 00:15 |
| 7 | 0.372230 | 0.277723 | 0.926525 | 0.000000 | 00:15 |
| 8 | 0.354502 | 0.272780 | 0.926525 | 0.000000 | 00:15 |
| 9 | 0.344974 | 0.272024 | 0.926525 | 0.000000 | 00:15 |
Better model found at epoch 0 with accuracy value: 0.25569432973861694. Better model found at epoch 1 with accuracy value: 0.58486407995224. Better model found at epoch 2 with accuracy value: 0.8464364409446716. Better model found at epoch 3 with accuracy value: 0.9000734686851501. Better model found at epoch 4 with accuracy value: 0.916972815990448. Better model found at epoch 5 with accuracy value: 0.9265246391296387.
/usr/local/lib/python3.7/dist-packages/sklearn/metrics/_classification.py:1272: UndefinedMetricWarning: Precision is ill-defined and being set to 0.0 due to no predicted samples. Use `zero_division` parameter to control this behavior. _warn_prf(average, modifier, msg_start, len(result))
The model found that the best it can do is to predict every case as non-atypical... 0.92 accuracy, 0 precision...
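The arithmetic behind this degenerate result is simple: with 1261 non-atypical and only 100 atypical images in the validation set, a model that always answers "non-atypical" scores high accuracy while never making a positive prediction:

```python
# Validation set class counts: 1261 non-atypical, 100 atypical.
negatives, positives = 1261, 100

# A degenerate model that predicts "non-atypical" for every image:
accuracy = negatives / (negatives + positives)  # all negatives right, all positives missed
predicted_positives = 0                         # precision has a zero denominator,
                                                # which sklearn reports as 0.0

print(round(accuracy, 6))  # 0.926525 - exactly the plateau in the table above
```

This is why accuracy alone is a misleading metric on such an imbalanced class.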
We have to change the loss function to focal loss. In focal loss, the hardest examples get more weight in the loss function, so the model tries harder to improve its precision on them. We will set the gamma parameter to a high value, because the atypical class seems to be very hard to detect.
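To make the effect of gamma concrete, here is a minimal NumPy sketch of the standard focal loss formula, FL(p_t) = -(1 - p_t)^gamma * log(p_t) (a simplification of what fastai's `FocalLossFlat` computes, on hypothetical probabilities):

```python
import numpy as np

def focal_loss(probs, targets, gamma=5.0):
    # probs: (n, classes) predicted probabilities; targets: true class indices.
    p_t = probs[np.arange(len(targets)), targets]        # probability of the true class
    return np.mean(-((1 - p_t) ** gamma) * np.log(p_t))  # down-weight easy examples

# An easy example (p_t = 0.9) contributes almost nothing at gamma = 5,
# while a hard one (p_t = 0.1) keeps most of its cross-entropy loss.
probs = np.array([[0.9, 0.1]])
easy = focal_loss(probs, np.array([0]), gamma=5.0)  # well-classified
hard = focal_loss(probs, np.array([1]), gamma=5.0)  # badly misclassified
```

With gamma = 0 this reduces to plain cross-entropy; the larger the gamma, the more the loss concentrates on the misclassified minority-class cases.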
learn = cnn_learner(atypical_dls, resnet18, metrics=[accuracy, Precision(), Recall()], cbs=[SaveModelCallback('precision_score', fname='resnet18_atypical', reset_on_fit=False)],
path='/content/gdrive/MyDrive/covid19-detection/project-models/', loss_func=FocalLossFlat(gamma=5))
random_seed(seed)
learn.fit_one_cycle(10, lr)
| epoch | train_loss | valid_loss | accuracy | precision_score | recall_score | time |
|---|---|---|---|---|---|---|
| 0 | 0.685791 | 0.104075 | 0.744306 | 0.063380 | 0.180000 | 00:15 |
| 1 | 0.508262 | 0.088695 | 0.706833 | 0.074074 | 0.260000 | 00:15 |
| 2 | 0.338057 | 0.052572 | 0.879500 | 0.100000 | 0.080000 | 00:15 |
| 3 | 0.221858 | 0.030815 | 0.797208 | 0.100000 | 0.220000 | 00:15 |
| 4 | 0.152842 | 0.030815 | 0.889787 | 0.109375 | 0.070000 | 00:15 |
| 5 | 0.104253 | 0.025201 | 0.912564 | 0.086957 | 0.020000 | 00:15 |
| 6 | 0.077236 | 0.021808 | 0.919177 | 0.083333 | 0.010000 | 00:15 |
| 7 | 0.058763 | 0.024777 | 0.922851 | 0.000000 | 0.000000 | 00:15 |
| 8 | 0.049508 | 0.020521 | 0.922851 | 0.142857 | 0.010000 | 00:15 |
| 9 | 0.043574 | 0.020294 | 0.921381 | 0.111111 | 0.010000 | 00:15 |
Better model found at epoch 0 with precision_score value: 0.06338028169014084. Better model found at epoch 1 with precision_score value: 0.07407407407407407. Better model found at epoch 2 with precision_score value: 0.1. Better model found at epoch 4 with precision_score value: 0.109375. Better model found at epoch 8 with precision_score value: 0.14285714285714285.
intrep = ClassificationInterpretation.from_learner(learn)
intrep.print_classification_report()
| | precision | recall | f1-score | support |
|---|---|---|---|---|
| 0 | 0.93 | 1.00 | 0.96 | 1261 |
| 1 | 0.14 | 0.01 | 0.02 | 100 |
| accuracy | | | 0.92 | 1361 |
| macro avg | 0.53 | 0.50 | 0.49 | 1361 |
| weighted avg | 0.87 | 0.92 | 0.89 | 1361 |
The precision and recall of class 1 (the atypical class) are much worse than in the four-class model. Although not shown here, I got similar results for all the classes. These results surprised me: the binary model's task is narrower than the four-class model's, and I expected it to specialize in that binary task and be more accurate on this class alone. It turns out that the additional information the four-class model gets about how to classify the non-atypical cases gives it more knowledge of the problem and helps it improve on the atypical cases too.
!unzip /content/gdrive/MyDrive/covid19-detection/lung_ann.zip -d /content/lung_ann
Until now we compared several models and training methods, always training on the same training set and evaluating on the same validation set. Now that we have two winning models and a training scheme, we want to use all the data we have for training. To do so we will use the 5-fold method: we split our dataset into 5 distinct folds and train 5 models, each on 4 of the folds, while the remaining fold serves as a validation set used to save the best model during training. At inference time, we will average the predictions of all five models as the final prediction.
By doing so, we not only take advantage of all the data we have, but also add regularization by averaging several models.
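The inference-time averaging step can be sketched as follows (a minimal NumPy stand-in: the per-fold probabilities here are synthetic, whereas in practice they would come from the five exported learners):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for the five fold models' predicted probabilities
# on 3 test images over the 4 study classes (hypothetical values).
fold_preds = [rng.dirichlet(np.ones(4), size=3) for _ in range(5)]

# Ensemble prediction: average the class probabilities across the folds,
# then take the most likely class per image.
ensemble = np.mean(fold_preds, axis=0)
final_classes = ensemble.argmax(axis=1)
```

Averaging valid probability distributions yields a valid distribution, so the ensemble output can be submitted (or thresholded) exactly like a single model's.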
from sklearn.model_selection import StratifiedKFold
nfolds = 5
skf = StratifiedKFold(nfolds, shuffle=True, random_state=seed)
for i, (train_idx, val_idx) in enumerate(skf.split(train_df, train_df.study_label)):
    train_df.loc[val_idx, 'val_fold'] = i
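As a sanity check on this split pattern, the same code on a toy frame (hypothetical labels standing in for `train_df`) shows that `StratifiedKFold` assigns each sample to exactly one validation fold while preserving the class ratio within every fold:

```python
import pandas as pd
from sklearn.model_selection import StratifiedKFold

# Toy stand-in for train_df: 15 'a' labels and 5 'b' labels (hypothetical data).
df = pd.DataFrame({'study_label': ['a'] * 15 + ['b'] * 5})

skf = StratifiedKFold(5, shuffle=True, random_state=0)
for i, (_, val_idx) in enumerate(skf.split(df, df.study_label)):
    df.loc[val_idx, 'val_fold'] = i

# Each of the 5 folds holds 4 samples: 3 'a' and exactly 1 'b',
# preserving the 3:1 class ratio of the full frame.
```

Stratification matters here because the study labels are imbalanced; a plain `KFold` could leave some folds with almost no atypical cases.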
for fold in range(nfolds):
    print(f'Fold {fold}')
    train_df['current_val_fold'] = train_df.val_fold == fold
    dls = ImageDataLoaders.from_df(train_df, path/'train', fn_col='image_fn', label_col='study_label', bs=128, seed=seed, valid_col='current_val_fold',
                                   batch_tfms=aug_transforms(
                                       mult=1.3,
                                       max_rotate=25,
                                       min_zoom=.9,
                                       max_zoom=1.3,
                                       max_lighting=.4,
                                       max_warp=.3
                                   ))
    learn = cnn_learner(dls, resnet18, metrics=metrics, cbs=[MixUp(), SaveModelCallback('accuracy', fname=f'resnet18_fold_{fold}', reset_on_fit=False)],
                        path='/content/gdrive/MyDrive/covid19-detection/project-models/')
    random_seed(seed)
    print('Training...')
    learn.fit_one_cycle(10, lr)
    learn.fit_one_cycle(10, lr)
    learn.fit_one_cycle(20, lr)
    learn.load(f'resnet18_fold_{fold}')  # load best model
    learn.export(f'resnet18_fold_{fold}.pkl')
    print('Done.')
Fold 0 Training...
| epoch | train_loss | valid_loss | accuracy | time |
|---|---|---|---|---|
| 0 | 2.234610 | 1.492498 | 0.393844 | 00:14 |
| 1 | 1.999853 | 1.313826 | 0.516180 | 00:14 |
| 2 | 1.817956 | 1.222785 | 0.550118 | 00:14 |
| 3 | 1.631885 | 1.113771 | 0.585635 | 00:14 |
| 4 | 1.476509 | 1.101243 | 0.591949 | 00:14 |
| 5 | 1.385140 | 1.059921 | 0.587214 | 00:14 |
| 6 | 1.309986 | 1.047763 | 0.598264 | 00:14 |
| 7 | 1.267001 | 1.038614 | 0.595896 | 00:14 |
| 8 | 1.228759 | 1.040258 | 0.599053 | 00:14 |
| 9 | 1.215098 | 1.039436 | 0.600631 | 00:14 |
Better model found at epoch 0 with accuracy value: 0.39384374022483826. Better model found at epoch 1 with accuracy value: 0.5161799788475037. Better model found at epoch 2 with accuracy value: 0.5501183867454529. Better model found at epoch 3 with accuracy value: 0.5856353640556335. Better model found at epoch 4 with accuracy value: 0.591949462890625. Better model found at epoch 6 with accuracy value: 0.5982636213302612. Better model found at epoch 8 with accuracy value: 0.599052906036377. Better model found at epoch 9 with accuracy value: 0.6006314158439636.
| epoch | train_loss | valid_loss | accuracy | time |
|---|---|---|---|---|
| 0 | 1.194779 | 1.034400 | 0.597474 | 00:14 |
| 1 | 1.199681 | 1.043077 | 0.607735 | 00:14 |
| 2 | 1.187455 | 1.042423 | 0.598264 | 00:14 |
| 3 | 1.160036 | 1.016269 | 0.615627 | 00:14 |
| 4 | 1.143094 | 1.020813 | 0.605367 | 00:14 |
| 5 | 1.126428 | 1.015589 | 0.606946 | 00:14 |
| 6 | 1.111182 | 1.003368 | 0.610892 | 00:14 |
| 7 | 1.099076 | 0.992690 | 0.610892 | 00:14 |
| 8 | 1.091586 | 0.992776 | 0.613260 | 00:14 |
| 9 | 1.090414 | 0.992746 | 0.613260 | 00:14 |
Better model found at epoch 1 with accuracy value: 0.6077347993850708. Better model found at epoch 3 with accuracy value: 0.6156274676322937.
| epoch | train_loss | valid_loss | accuracy | time |
|---|---|---|---|---|
| 0 | 1.113723 | 1.010798 | 0.614838 | 00:14 |
| 1 | 1.117286 | 1.001969 | 0.614838 | 00:14 |
| 2 | 1.116084 | 1.006922 | 0.611681 | 00:14 |
| 3 | 1.106738 | 1.000605 | 0.602999 | 00:14 |
| 4 | 1.094404 | 0.995711 | 0.605367 | 00:14 |
| 5 | 1.091563 | 0.999526 | 0.607735 | 00:14 |
| 6 | 1.085855 | 1.021288 | 0.606946 | 00:14 |
| 7 | 1.077358 | 1.003165 | 0.609313 | 00:14 |
| 8 | 1.075005 | 1.004632 | 0.605367 | 00:14 |
| 9 | 1.070916 | 0.995446 | 0.612470 | 00:14 |
| 10 | 1.069739 | 0.994115 | 0.606156 | 00:14 |
| 11 | 1.054898 | 0.982770 | 0.614838 | 00:14 |
| 12 | 1.053914 | 0.977234 | 0.613260 | 00:14 |
| 13 | 1.054965 | 0.981484 | 0.614838 | 00:14 |
| 14 | 1.044761 | 0.984837 | 0.613260 | 00:14 |
| 15 | 1.048095 | 0.971565 | 0.616417 | 00:14 |
| 16 | 1.043121 | 0.969515 | 0.614049 | 00:14 |
| 17 | 1.038041 | 0.971818 | 0.614049 | 00:14 |
| 18 | 1.033432 | 0.972593 | 0.612470 | 00:14 |
| 19 | 1.035268 | 0.971517 | 0.612470 | 00:14 |
Better model found at epoch 15 with accuracy value: 0.6164167523384094. Done. Fold 1 Training...
| epoch | train_loss | valid_loss | accuracy | time |
|---|---|---|---|---|
| 0 | 2.205660 | 1.665430 | 0.340963 | 00:14 |
| 1 | 1.959726 | 1.369153 | 0.520916 | 00:14 |
| 2 | 1.765720 | 1.271408 | 0.505920 | 00:14 |
| 3 | 1.597780 | 1.189326 | 0.567482 | 00:14 |
| 4 | 1.480354 | 1.095861 | 0.573007 | 00:14 |
| 5 | 1.385397 | 1.099242 | 0.585635 | 00:14 |
| 6 | 1.307071 | 1.074525 | 0.597474 | 00:14 |
| 7 | 1.261139 | 1.072500 | 0.609313 | 00:14 |
| 8 | 1.232931 | 1.061616 | 0.610892 | 00:14 |
| 9 | 1.214369 | 1.060513 | 0.611681 | 00:14 |
Better model found at epoch 0 with accuracy value: 0.3409629166126251. Better model found at epoch 1 with accuracy value: 0.5209155678749084. Better model found at epoch 3 with accuracy value: 0.5674822330474854. Better model found at epoch 4 with accuracy value: 0.5730071067810059. Better model found at epoch 5 with accuracy value: 0.5856353640556335. Better model found at epoch 6 with accuracy value: 0.5974743366241455. Better model found at epoch 7 with accuracy value: 0.6093133091926575. Better model found at epoch 8 with accuracy value: 0.6108918786048889. Better model found at epoch 9 with accuracy value: 0.6116811633110046.
| epoch | train_loss | valid_loss | accuracy | time |
|---|---|---|---|---|
| 0 | 1.210015 | 1.055136 | 0.611681 | 00:14 |
| 1 | 1.198159 | 1.058567 | 0.604578 | 00:14 |
| 2 | 1.190304 | 1.025291 | 0.607735 | 00:14 |
| 3 | 1.162313 | 1.026744 | 0.602999 | 00:14 |
| 4 | 1.143989 | 1.027231 | 0.610892 | 00:14 |
| 5 | 1.129938 | 1.030792 | 0.605367 | 00:14 |
| 6 | 1.112807 | 1.013807 | 0.612470 | 00:14 |
| 7 | 1.102528 | 1.019167 | 0.616417 | 00:14 |
| 8 | 1.094865 | 1.012341 | 0.620363 | 00:14 |
| 9 | 1.088419 | 1.012986 | 0.618785 | 00:14 |
Better model found at epoch 6 with accuracy value: 0.6124703884124756. Better model found at epoch 7 with accuracy value: 0.6164167523384094. Better model found at epoch 8 with accuracy value: 0.6203630566596985.
| epoch | train_loss | valid_loss | accuracy | time |
|---|---|---|---|---|
| 0 | 1.074637 | 1.018257 | 0.617995 | 00:14 |
| 1 | 1.075830 | 1.007031 | 0.621152 | 00:14 |
| 2 | 1.083995 | 1.018080 | 0.611681 | 00:14 |
| 3 | 1.089273 | 1.006284 | 0.625888 | 00:14 |
| 4 | 1.082401 | 0.990304 | 0.625888 | 00:14 |
| 5 | 1.078249 | 0.995740 | 0.630624 | 00:14 |
| 6 | 1.074810 | 0.999103 | 0.632991 | 00:14 |
| 7 | 1.071434 | 1.030226 | 0.621152 | 00:14 |
| 8 | 1.063403 | 1.011322 | 0.628256 | 00:14 |
| 9 | 1.060281 | 1.003780 | 0.629834 | 00:14 |
| 10 | 1.056962 | 0.995367 | 0.629045 | 00:14 |
| 11 | 1.052032 | 0.980740 | 0.630624 | 00:14 |
| 12 | 1.048978 | 1.006859 | 0.617206 | 00:14 |
| 13 | 1.039295 | 0.996560 | 0.630624 | 00:14 |
| 14 | 1.039755 | 1.001216 | 0.632991 | 00:14 |
| 15 | 1.035918 | 1.000442 | 0.625888 | 00:14 |
| 16 | 1.040888 | 0.986756 | 0.629834 | 00:14 |
| 17 | 1.039023 | 0.986747 | 0.635359 | 00:14 |
| 18 | 1.043181 | 0.985517 | 0.635359 | 00:14 |
| 19 | 1.034975 | 0.984051 | 0.634570 | 00:14 |
Better model found at epoch 1 with accuracy value: 0.6211523413658142. Better model found at epoch 3 with accuracy value: 0.625887930393219. Better model found at epoch 5 with accuracy value: 0.6306235194206238. Better model found at epoch 6 with accuracy value: 0.6329913139343262. Better model found at epoch 17 with accuracy value: 0.6353591084480286. Done. Fold 2 Training...
| epoch | train_loss | valid_loss | accuracy | time |
|---|---|---|---|---|
| 0 | 2.220901 | 1.729824 | 0.322021 | 00:14 |
| 1 | 2.002295 | 1.309641 | 0.505130 | 00:14 |
| 2 | 1.797344 | 1.204352 | 0.530387 | 00:14 |
| 3 | 1.624646 | 1.148906 | 0.557222 | 00:14 |
| 4 | 1.478006 | 1.160477 | 0.544594 | 00:15 |
| 5 | 1.372335 | 1.044647 | 0.600631 | 00:14 |
| 6 | 1.317967 | 1.042510 | 0.614049 | 00:15 |
| 7 | 1.274113 | 1.020103 | 0.611681 | 00:15 |
| 8 | 1.248244 | 1.017023 | 0.610892 | 00:14 |
| 9 | 1.229082 | 1.013723 | 0.610892 | 00:14 |
Better model found at epoch 0 with accuracy value: 0.3220205307006836. Better model found at epoch 1 with accuracy value: 0.5051302313804626. Better model found at epoch 2 with accuracy value: 0.530386745929718. Better model found at epoch 3 with accuracy value: 0.5572217702865601. Better model found at epoch 5 with accuracy value: 0.6006314158439636. Better model found at epoch 6 with accuracy value: 0.614048957824707.
| epoch | train_loss | valid_loss | accuracy | time |
|---|---|---|---|---|
| 0 | 1.241012 | 1.024024 | 0.614049 | 00:14 |
| 1 | 1.222664 | 1.027017 | 0.610103 | 00:14 |
| 2 | 1.204307 | 1.045847 | 0.611681 | 00:14 |
| 3 | 1.178959 | 1.019698 | 0.606946 | 00:14 |
| 4 | 1.153331 | 1.024178 | 0.606156 | 00:14 |
| 5 | 1.133720 | 0.989195 | 0.622731 | 00:14 |
| 6 | 1.115040 | 0.978886 | 0.624309 | 00:14 |
| 7 | 1.097917 | 0.983074 | 0.615627 | 00:14 |
| 8 | 1.096667 | 0.985469 | 0.614049 | 00:14 |
| 9 | 1.089024 | 0.984056 | 0.614049 | 00:14 |
Better model found at epoch 5 with accuracy value: 0.6227308511734009. Better model found at epoch 6 with accuracy value: 0.6243094205856323.
| epoch | train_loss | valid_loss | accuracy | time |
|---|---|---|---|---|
| 0 | 1.099421 | 0.973622 | 0.623520 | 00:14 |
| 1 | 1.100617 | 0.981520 | 0.619574 | 00:14 |
| 2 | 1.099980 | 0.972365 | 0.629834 | 00:14 |
| 3 | 1.097065 | 1.004939 | 0.610892 | 00:14 |
| 4 | 1.089672 | 1.012287 | 0.613260 | 00:14 |
| 5 | 1.085104 | 0.984831 | 0.623520 | 00:14 |
| 6 | 1.083457 | 0.987793 | 0.613260 | 00:14 |
| 7 | 1.072513 | 1.009532 | 0.607735 | 00:14 |
| 8 | 1.068079 | 0.992402 | 0.614049 | 00:14 |
| 9 | 1.059108 | 0.993120 | 0.610892 | 00:14 |
| 10 | 1.058179 | 0.988686 | 0.610103 | 00:14 |
| 11 | 1.057319 | 0.978314 | 0.613260 | 00:14 |
| 12 | 1.055715 | 0.979874 | 0.610103 | 00:14 |
| 13 | 1.045802 | 0.977466 | 0.614049 | 00:14 |
| 14 | 1.042296 | 0.979040 | 0.614049 | 00:14 |
| 15 | 1.036711 | 0.981705 | 0.610892 | 00:14 |
| 16 | 1.036412 | 0.973306 | 0.610103 | 00:14 |
| 17 | 1.034920 | 0.970677 | 0.612470 | 00:14 |
| 18 | 1.036899 | 0.974866 | 0.610892 | 00:14 |
| 19 | 1.038937 | 0.975204 | 0.610103 | 00:14 |
Better model found at epoch 2 with accuracy value: 0.6298342347145081. Done. Fold 3 Training...
| epoch | train_loss | valid_loss | accuracy | time |
|---|---|---|---|---|
| 0 | 2.202187 | 1.708136 | 0.307814 | 00:14 |
| 1 | 2.010480 | 1.362801 | 0.502762 | 00:14 |
| 2 | 1.794490 | 1.219032 | 0.542226 | 00:14 |
| 3 | 1.625865 | 1.128798 | 0.572218 | 00:14 |
| 4 | 1.487641 | 1.081557 | 0.575375 | 00:14 |
| 5 | 1.384580 | 1.050052 | 0.593528 | 00:14 |
| 6 | 1.309013 | 1.048114 | 0.602999 | 00:14 |
| 7 | 1.260476 | 1.031599 | 0.609313 | 00:14 |
| 8 | 1.237823 | 1.031589 | 0.614049 | 00:14 |
| 9 | 1.217913 | 1.028020 | 0.611681 | 00:14 |
Better model found at epoch 0 with accuracy value: 0.30781373381614685. Better model found at epoch 1 with accuracy value: 0.5027624368667603. Better model found at epoch 2 with accuracy value: 0.54222571849823. Better model found at epoch 3 with accuracy value: 0.5722178220748901. Better model found at epoch 4 with accuracy value: 0.5753749012947083. Better model found at epoch 5 with accuracy value: 0.5935280323028564. Better model found at epoch 6 with accuracy value: 0.602999210357666. Better model found at epoch 7 with accuracy value: 0.6093133091926575. Better model found at epoch 8 with accuracy value: 0.614048957824707.
| epoch | train_loss | valid_loss | accuracy | time |
|---|---|---|---|---|
| 0 | 1.202927 | 1.020557 | 0.613260 | 00:14 |
| 1 | 1.195205 | 1.043596 | 0.604578 | 00:14 |
| 2 | 1.185044 | 1.044132 | 0.592739 | 00:14 |
| 3 | 1.159005 | 1.014418 | 0.610103 | 00:14 |
| 4 | 1.141352 | 1.010734 | 0.618785 | 00:14 |
| 5 | 1.133509 | 1.001663 | 0.617206 | 00:14 |
| 6 | 1.113134 | 0.987656 | 0.622731 | 00:14 |
| 7 | 1.098867 | 0.985414 | 0.619574 | 00:14 |
| 8 | 1.094039 | 0.981885 | 0.625099 | 00:14 |
| 9 | 1.092553 | 0.982818 | 0.624309 | 00:14 |
Better model found at epoch 4 with accuracy value: 0.6187845468521118. Better model found at epoch 6 with accuracy value: 0.6227308511734009. Better model found at epoch 8 with accuracy value: 0.6250986456871033.
| epoch | train_loss | valid_loss | accuracy | time |
|---|---|---|---|---|
| 0 | 1.088770 | 0.983164 | 0.621942 | 00:14 |
| 1 | 1.080046 | 0.981290 | 0.621152 | 00:14 |
| 2 | 1.090278 | 0.995549 | 0.614838 | 00:14 |
| 3 | 1.093406 | 1.004714 | 0.613260 | 00:14 |
| 4 | 1.091238 | 1.001562 | 0.610892 | 00:14 |
| 5 | 1.083788 | 0.986191 | 0.625888 | 00:14 |
| 6 | 1.077744 | 0.977527 | 0.619574 | 00:14 |
| 7 | 1.074677 | 0.994902 | 0.612470 | 00:14 |
| 8 | 1.073335 | 0.982139 | 0.623520 | 00:14 |
| 9 | 1.069544 | 0.973899 | 0.620363 | 00:14 |
| 10 | 1.069606 | 0.969809 | 0.620363 | 00:14 |
| 11 | 1.059778 | 0.961849 | 0.627466 | 00:14 |
| 12 | 1.055826 | 0.969770 | 0.622731 | 00:14 |
| 13 | 1.043280 | 0.972567 | 0.620363 | 00:14 |
| 14 | 1.044094 | 0.961624 | 0.625888 | 00:14 |
| 15 | 1.040175 | 0.954382 | 0.632991 | 00:14 |
| 16 | 1.035493 | 0.953654 | 0.629045 | 00:14 |
| 17 | 1.034546 | 0.955404 | 0.631413 | 00:14 |
| 18 | 1.035051 | 0.954753 | 0.631413 | 00:14 |
| 19 | 1.032901 | 0.956645 | 0.632202 | 00:14 |
Better model found at epoch 5 with accuracy value: 0.625887930393219. Better model found at epoch 11 with accuracy value: 0.6274664402008057. Better model found at epoch 15 with accuracy value: 0.6329913139343262. Done. Fold 4 Training...
| epoch | train_loss | valid_loss | accuracy | time |
|---|---|---|---|---|
| 0 | 2.246577 | 1.757234 | 0.329384 | 00:14 |
| 1 | 1.985038 | 1.379500 | 0.499210 | 00:14 |
| 2 | 1.775595 | 1.248947 | 0.541864 | 00:14 |
| 3 | 1.610152 | 1.133465 | 0.575829 | 00:14 |
| 4 | 1.478290 | 1.107720 | 0.593997 | 00:14 |
| 5 | 1.382506 | 1.078136 | 0.607425 | 00:14 |
| 6 | 1.317817 | 1.047092 | 0.601106 | 00:14 |
| 7 | 1.272947 | 1.037033 | 0.612954 | 00:14 |
| 8 | 1.244536 | 1.030353 | 0.614534 | 00:14 |
| 9 | 1.213058 | 1.030292 | 0.608215 | 00:14 |
Better model found at epoch 0 with accuracy value: 0.32938387989997864. Better model found at epoch 1 with accuracy value: 0.4992101192474365. Better model found at epoch 2 with accuracy value: 0.5418641567230225. Better model found at epoch 3 with accuracy value: 0.5758293867111206. Better model found at epoch 4 with accuracy value: 0.5939968228340149. Better model found at epoch 5 with accuracy value: 0.6074249744415283. Better model found at epoch 7 with accuracy value: 0.6129541993141174. Better model found at epoch 8 with accuracy value: 0.6145339608192444.
| epoch | train_loss | valid_loss | accuracy | time |
|---|---|---|---|---|
| 0 | 1.220735 | 1.019733 | 0.617694 | 00:14 |
| 1 | 1.207228 | 1.032834 | 0.618483 | 00:14 |
| 2 | 1.186590 | 1.021601 | 0.616904 | 00:14 |
| 3 | 1.166744 | 0.988091 | 0.631122 | 00:14 |
| 4 | 1.149615 | 1.008999 | 0.620063 | 00:14 |
| 5 | 1.126041 | 0.987518 | 0.628752 | 00:14 |
| 6 | 1.111983 | 0.978616 | 0.624013 | 00:14 |
| 7 | 1.100897 | 0.973983 | 0.632701 | 00:14 |
| 8 | 1.095111 | 0.973408 | 0.635071 | 00:14 |
| 9 | 1.094721 | 0.974186 | 0.632701 | 00:14 |
Better model found at epoch 0 with accuracy value: 0.6176935434341431. Better model found at epoch 1 with accuracy value: 0.6184834241867065. Better model found at epoch 3 with accuracy value: 0.6311216354370117. Better model found at epoch 7 with accuracy value: 0.6327013969421387. Better model found at epoch 8 with accuracy value: 0.6350710988044739.
| epoch | train_loss | valid_loss | accuracy | time |
|---|---|---|---|---|
| 0 | 1.079432 | 0.973780 | 0.635071 | 00:14 |
| 1 | 1.075992 | 0.967503 | 0.635861 | 00:14 |
| 2 | 1.081582 | 0.969732 | 0.634281 | 00:14 |
| 3 | 1.083442 | 0.966936 | 0.634281 | 00:14 |
| 4 | 1.083476 | 0.972557 | 0.626382 | 00:14 |
| 5 | 1.079381 | 1.001693 | 0.614534 | 00:14 |
| 6 | 1.084411 | 1.004687 | 0.612954 | 00:14 |
| 7 | 1.074714 | 0.953614 | 0.639810 | 00:14 |
| 8 | 1.068605 | 0.960483 | 0.634281 | 00:14 |
| 9 | 1.065629 | 0.958600 | 0.640600 | 00:14 |
| 10 | 1.064224 | 0.963929 | 0.627962 | 00:14 |
| 11 | 1.058093 | 0.958223 | 0.633491 | 00:14 |
| 12 | 1.055392 | 0.955778 | 0.630332 | 00:14 |
| 13 | 1.052739 | 0.942732 | 0.638231 | 00:14 |
| 14 | 1.048887 | 0.930998 | 0.642180 | 00:14 |
| 15 | 1.042231 | 0.935013 | 0.637441 | 00:14 |
| 16 | 1.039213 | 0.935692 | 0.639810 | 00:14 |
| 17 | 1.035503 | 0.937022 | 0.639021 | 00:14 |
| 18 | 1.037153 | 0.934190 | 0.639810 | 00:14 |
| 19 | 1.031371 | 0.936503 | 0.636651 | 00:14 |
Better model found at epoch 1 with accuracy value: 0.6358609795570374. Better model found at epoch 7 with accuracy value: 0.6398104429244995. Better model found at epoch 9 with accuracy value: 0.640600323677063. Better model found at epoch 14 with accuracy value: 0.6421800851821899. Done.
for fold in range(nfolds):
    print(f'Fold {fold}')
    train_df['current_val_fold'] = train_df.val_fold == fold
    dls = ImageDataLoaders.from_df(train_df, path/'train', fn_col='image_fn', label_col='study_label', bs=64, seed=seed, valid_col='current_val_fold',
                                   batch_tfms=aug_transforms(
                                       mult=1.3,
                                       max_rotate=25,
                                       min_zoom=.9,
                                       max_zoom=1.3,
                                       max_lighting=.4,
                                       max_warp=.3
                                   ))
    learn = cnn_learner(dls, resnet34, metrics=metrics, cbs=[MixUp(), SaveModelCallback('accuracy', fname=f'resnet34_fold_{fold}', reset_on_fit=False)],
                        path='/content/gdrive/MyDrive/covid19-detection/project-models/')
    random_seed(seed)
    print('Training...')
    learn.fit_one_cycle(10, lr)
    learn.fit_one_cycle(10, lr)
    learn.fit_one_cycle(20, lr)
    learn.load(f'resnet34_fold_{fold}')  # load best model
    learn.export(f'resnet34_fold_{fold}.pkl')
    print('Done.')
Fold 0 Training...
| epoch | train_loss | valid_loss | accuracy | time |
|---|---|---|---|---|
| 0 | 2.198955 | 1.455005 | 0.468824 | 00:25 |
| 1 | 1.898526 | 1.375287 | 0.528808 | 00:25 |
| 2 | 1.605244 | 1.203083 | 0.553275 | 00:25 |
| 3 | 1.408272 | 1.134905 | 0.558011 | 00:25 |
| 4 | 1.272342 | 1.067355 | 0.577743 | 00:25 |
| 5 | 1.198823 | 1.045238 | 0.596685 | 00:25 |
| 6 | 1.163339 | 1.041618 | 0.606156 | 00:25 |
| 7 | 1.138712 | 1.020982 | 0.602999 | 00:25 |
| 8 | 1.132937 | 1.030093 | 0.600631 | 00:25 |
| 9 | 1.121814 | 1.027627 | 0.595896 | 00:25 |
Better model found at epoch 0 with accuracy value: 0.46882399916648865. Better model found at epoch 1 with accuracy value: 0.5288082361221313. Better model found at epoch 2 with accuracy value: 0.553275465965271. Better model found at epoch 3 with accuracy value: 0.5580110549926758. Better model found at epoch 4 with accuracy value: 0.5777426958084106. Better model found at epoch 5 with accuracy value: 0.5966851115226746. Better model found at epoch 6 with accuracy value: 0.6061562895774841.
| epoch | train_loss | valid_loss | accuracy | time |
|---|---|---|---|---|
| 0 | 1.122382 | 1.045630 | 0.600631 | 00:25 |
| 1 | 1.134466 | 1.066402 | 0.598264 | 00:25 |
| 2 | 1.123392 | 1.036959 | 0.612470 | 00:25 |
| 3 | 1.116510 | 1.017898 | 0.606156 | 00:25 |
| 4 | 1.102789 | 1.008266 | 0.602210 | 00:25 |
| 5 | 1.089351 | 1.003396 | 0.599842 | 00:25 |
| 6 | 1.070792 | 0.991155 | 0.618785 | 00:25 |
| 7 | 1.064025 | 0.979076 | 0.620363 | 00:25 |
| 8 | 1.057279 | 0.983694 | 0.621152 | 00:25 |
| 9 | 1.059520 | 0.985880 | 0.617206 | 00:25 |
Better model found at epoch 2 with accuracy value: 0.6124703884124756. Better model found at epoch 6 with accuracy value: 0.6187845468521118. Better model found at epoch 7 with accuracy value: 0.6203630566596985. Better model found at epoch 8 with accuracy value: 0.6211523413658142.
| epoch | train_loss | valid_loss | accuracy | time |
|---|---|---|---|---|
| 0 | 1.068274 | 0.976097 | 0.620363 | 00:25 |
| 1 | 1.056215 | 0.982830 | 0.617206 | 00:25 |
| 2 | 1.053725 | 0.982623 | 0.621152 | 00:25 |
| 3 | 1.062020 | 0.985856 | 0.620363 | 00:25 |
| 4 | 1.058906 | 0.985248 | 0.608524 | 00:25 |
| 5 | 1.060967 | 1.010143 | 0.610892 | 00:25 |
| 6 | 1.063434 | 1.006849 | 0.607735 | 00:25 |
| 7 | 1.062042 | 0.981906 | 0.619574 | 00:25 |
| 8 | 1.042451 | 0.988982 | 0.614049 | 00:25 |
| 9 | 1.045743 | 0.987031 | 0.610892 | 00:25 |
| 10 | 1.042289 | 0.977732 | 0.609313 | 00:25 |
| 11 | 1.038842 | 0.978962 | 0.618785 | 00:25 |
| 12 | 1.031783 | 0.970664 | 0.617995 | 00:25 |
| 13 | 1.020992 | 0.965433 | 0.620363 | 00:25 |
| 14 | 1.019145 | 0.964934 | 0.619574 | 00:25 |
| 15 | 1.023820 | 0.955554 | 0.621152 | 00:25 |
| 16 | 1.007839 | 0.963729 | 0.618785 | 00:25 |
| 17 | 1.005305 | 0.949449 | 0.628256 | 00:25 |
| 18 | 1.014119 | 0.950245 | 0.624309 | 00:25 |
| 19 | 0.999608 | 0.962622 | 0.621152 | 00:25 |
Better model found at epoch 17 with accuracy value: 0.6282557249069214. Done. Fold 1 Training...
| epoch | train_loss | valid_loss | accuracy | time |
|---|---|---|---|---|
| 0 | 2.235517 | 1.589072 | 0.371744 | 00:25 |
| 1 | 1.911293 | 1.311483 | 0.479084 | 00:25 |
| 2 | 1.630861 | 1.218601 | 0.529597 | 00:25 |
| 3 | 1.434478 | 1.140904 | 0.547751 | 00:25 |
| 4 | 1.307059 | 1.072369 | 0.588003 | 00:25 |
| 5 | 1.234429 | 1.050139 | 0.585635 | 00:25 |
| 6 | 1.185345 | 1.052819 | 0.596685 | 00:25 |
| 7 | 1.156201 | 1.033458 | 0.605367 | 00:25 |
| 8 | 1.148974 | 1.033625 | 0.604578 | 00:25 |
| 9 | 1.131557 | 1.026955 | 0.610103 | 00:25 |
Better model found at epoch 0 with accuracy value: 0.3717442750930786. Better model found at epoch 1 with accuracy value: 0.47908446192741394. Better model found at epoch 2 with accuracy value: 0.5295974612236023. Better model found at epoch 3 with accuracy value: 0.5477505922317505. Better model found at epoch 4 with accuracy value: 0.5880031585693359. Better model found at epoch 6 with accuracy value: 0.5966851115226746. Better model found at epoch 7 with accuracy value: 0.6053670048713684. Better model found at epoch 9 with accuracy value: 0.6101025938987732.
| epoch | train_loss | valid_loss | accuracy | time |
|---|---|---|---|---|
| 0 | 1.132458 | 1.029743 | 0.618785 | 00:25 |
| 1 | 1.131279 | 1.077724 | 0.583268 | 00:25 |
| 2 | 1.121359 | 1.055449 | 0.602999 | 00:25 |
| 3 | 1.107346 | 1.010855 | 0.620363 | 00:25 |
| 4 | 1.103917 | 0.993638 | 0.622731 | 00:25 |
| 5 | 1.082556 | 1.018399 | 0.610892 | 00:25 |
| 6 | 1.073016 | 1.007766 | 0.625099 | 00:25 |
| 7 | 1.066606 | 0.996029 | 0.623520 | 00:25 |
| 8 | 1.057854 | 1.003608 | 0.619574 | 00:25 |
| 9 | 1.056804 | 1.000958 | 0.620363 | 00:25 |
Better model found at epoch 0 with accuracy value: 0.6187845468521118. Better model found at epoch 3 with accuracy value: 0.6203630566596985. Better model found at epoch 4 with accuracy value: 0.6227308511734009. Better model found at epoch 6 with accuracy value: 0.6250986456871033.
| epoch | train_loss | valid_loss | accuracy | time |
|---|---|---|---|---|
| 0 | 1.060153 | 1.012557 | 0.617206 | 00:25 |
| 1 | 1.053751 | 0.998020 | 0.620363 | 00:25 |
| 2 | 1.066361 | 1.014656 | 0.614049 | 00:25 |
| 3 | 1.068550 | 1.023431 | 0.623520 | 00:25 |
| 4 | 1.069223 | 1.032246 | 0.604578 | 00:25 |
| 5 | 1.065220 | 0.997670 | 0.620363 | 00:25 |
| 6 | 1.055947 | 0.994146 | 0.606946 | 00:25 |
| 7 | 1.055414 | 1.031096 | 0.608524 | 00:25 |
| 8 | 1.050494 | 1.011551 | 0.616417 | 00:25 |
| 9 | 1.057959 | 0.997896 | 0.621152 | 00:25 |
| 10 | 1.052448 | 0.988330 | 0.624309 | 00:25 |
| 11 | 1.038003 | 1.011086 | 0.616417 | 00:25 |
| 12 | 1.034767 | 0.984762 | 0.631413 | 00:25 |
| 13 | 1.020182 | 0.985434 | 0.632202 | 00:25 |
| 14 | 1.014211 | 0.981199 | 0.632202 | 00:25 |
| 15 | 1.015450 | 0.978237 | 0.630624 | 00:25 |
| 16 | 1.012357 | 0.984901 | 0.630624 | 00:25 |
| 17 | 1.007682 | 0.976555 | 0.633781 | 00:25 |
| 18 | 1.012789 | 0.973961 | 0.633781 | 00:25 |
| 19 | 1.010057 | 0.981943 | 0.629834 | 00:25 |
Better model found at epoch 12 with accuracy value: 0.6314128041267395. Better model found at epoch 13 with accuracy value: 0.6322020292282104. Better model found at epoch 17 with accuracy value: 0.6337805986404419. Done. Fold 2 Training...
| epoch | train_loss | valid_loss | accuracy | time |
|---|---|---|---|---|
| 0 | 2.192167 | 1.607802 | 0.372534 | 00:25 |
| 1 | 1.918491 | 1.264872 | 0.509866 | 00:25 |
| 2 | 1.655571 | 1.161155 | 0.576164 | 00:25 |
| 3 | 1.459640 | 1.061519 | 0.566693 | 00:25 |
| 4 | 1.319673 | 1.030691 | 0.594317 | 00:25 |
| 5 | 1.229283 | 0.992595 | 0.625888 | 00:25 |
| 6 | 1.183756 | 1.008226 | 0.608524 | 00:25 |
| 7 | 1.159053 | 1.005332 | 0.607735 | 00:25 |
| 8 | 1.143366 | 0.990223 | 0.611681 | 00:25 |
| 9 | 1.122591 | 0.991095 | 0.615627 | 00:25 |
Better model found at epoch 0 with accuracy value: 0.37253352999687195. Better model found at epoch 1 with accuracy value: 0.5098658204078674. Better model found at epoch 2 with accuracy value: 0.576164186000824. Better model found at epoch 4 with accuracy value: 0.5943172574043274. Better model found at epoch 5 with accuracy value: 0.625887930393219.
| epoch | train_loss | valid_loss | accuracy | time |
|---|---|---|---|---|
| 0 | 1.183132 | 1.005112 | 0.617206 | 00:25 |
| 1 | 1.159525 | 1.046798 | 0.583268 | 00:25 |
| 2 | 1.156878 | 0.998382 | 0.610892 | 00:25 |
| 3 | 1.123664 | 1.032303 | 0.594317 | 00:25 |
| 4 | 1.102900 | 1.014537 | 0.596685 | 00:25 |
| 5 | 1.086533 | 1.017691 | 0.598264 | 00:25 |
| 6 | 1.069825 | 1.040683 | 0.589582 | 00:25 |
| 7 | 1.069034 | 1.012620 | 0.598264 | 00:25 |
| 8 | 1.059913 | 1.018457 | 0.595107 | 00:25 |
| 9 | 1.047088 | 1.001337 | 0.603788 | 00:25 |
| epoch | train_loss | valid_loss | accuracy | time |
|---|---|---|---|---|
| 0 | 1.190281 | 1.001714 | 0.616417 | 00:25 |
| 1 | 1.157022 | 1.013584 | 0.613260 | 00:25 |
| 2 | 1.149163 | 1.018391 | 0.593528 | 00:25 |
| 3 | 1.132601 | 1.027458 | 0.591949 | 00:25 |
| 4 | 1.123236 | 1.047384 | 0.591949 | 00:25 |
| 5 | 1.105491 | 0.991891 | 0.614049 | 00:25 |
| 6 | 1.095494 | 0.981582 | 0.609313 | 00:25 |
| 7 | 1.086692 | 0.985812 | 0.629834 | 00:25 |
| 8 | 1.082400 | 1.004746 | 0.596685 | 00:25 |
| 9 | 1.065536 | 0.980301 | 0.620363 | 00:25 |
| 10 | 1.059012 | 0.969600 | 0.617206 | 00:25 |
| 11 | 1.060529 | 0.985991 | 0.606946 | 00:25 |
| 12 | 1.053015 | 1.001556 | 0.598264 | 00:25 |
| 13 | 1.043566 | 0.980873 | 0.609313 | 00:25 |
| 14 | 1.036411 | 0.986171 | 0.606156 | 00:25 |
| 15 | 1.027356 | 0.979548 | 0.613260 | 00:25 |
| 16 | 1.032332 | 0.978410 | 0.610892 | 00:25 |
| 17 | 1.022903 | 0.976203 | 0.604578 | 00:25 |
| 18 | 1.040216 | 0.986064 | 0.604578 | 00:25 |
| 19 | 1.031011 | 0.991278 | 0.602999 | 00:25 |
Better model found at epoch 7 with accuracy value: 0.6298342347145081. Done. Fold 3 Training...
| epoch | train_loss | valid_loss | accuracy | time |
|---|---|---|---|---|
| 0 | 2.199070 | 1.601464 | 0.385951 | 00:25 |
| 1 | 1.907139 | 1.516839 | 0.393054 | 00:25 |
| 2 | 1.636967 | 1.224306 | 0.546172 | 00:25 |
| 3 | 1.442896 | 1.086439 | 0.572218 | 00:25 |
| 4 | 1.306612 | 1.068865 | 0.594317 | 00:25 |
| 5 | 1.244442 | 1.048904 | 0.592739 | 00:25 |
| 6 | 1.173175 | 1.035960 | 0.617995 | 00:25 |
| 7 | 1.161078 | 1.016915 | 0.621152 | 00:25 |
| 8 | 1.135560 | 1.031766 | 0.614838 | 00:25 |
| 9 | 1.135011 | 1.013242 | 0.619574 | 00:25 |
Better model found at epoch 0 with accuracy value: 0.38595107197761536. Better model found at epoch 1 with accuracy value: 0.39305445551872253. Better model found at epoch 2 with accuracy value: 0.5461720824241638. Better model found at epoch 3 with accuracy value: 0.5722178220748901. Better model found at epoch 4 with accuracy value: 0.5943172574043274. Better model found at epoch 6 with accuracy value: 0.6179952621459961. Better model found at epoch 7 with accuracy value: 0.6211523413658142.
| epoch | train_loss | valid_loss | accuracy | time |
|---|---|---|---|---|
| 0 | 1.144372 | 1.024360 | 0.620363 | 00:25 |
| 1 | 1.126831 | 1.015553 | 0.627466 | 00:25 |
| 2 | 1.118229 | 1.110919 | 0.576953 | 00:25 |
| 3 | 1.110242 | 1.023886 | 0.602999 | 00:25 |
| 4 | 1.102328 | 1.014616 | 0.606946 | 00:25 |
| 5 | 1.083611 | 0.991508 | 0.614838 | 00:25 |
| 6 | 1.084063 | 0.981845 | 0.614049 | 00:25 |
| 7 | 1.068733 | 0.982798 | 0.609313 | 00:25 |
| 8 | 1.052268 | 0.980851 | 0.611681 | 00:25 |
| 9 | 1.061891 | 0.987935 | 0.610892 | 00:25 |
Better model found at epoch 1 with accuracy value: 0.6274664402008057.
| epoch | train_loss | valid_loss | accuracy | time |
|---|---|---|---|---|
| 0 | 1.124089 | 1.009481 | 0.624309 | 00:25 |
| 1 | 1.119977 | 1.015595 | 0.617995 | 00:25 |
| 2 | 1.103107 | 0.999917 | 0.611681 | 00:25 |
| 3 | 1.105830 | 1.008078 | 0.621152 | 00:25 |
| 4 | 1.104212 | 0.985159 | 0.623520 | 00:25 |
| 5 | 1.089156 | 0.985600 | 0.621152 | 00:25 |
| 6 | 1.085182 | 1.012687 | 0.603788 | 00:25 |
| 7 | 1.078272 | 1.003837 | 0.614838 | 00:25 |
| 8 | 1.058939 | 1.005220 | 0.614838 | 00:25 |
| 9 | 1.064392 | 0.974047 | 0.621152 | 00:25 |
| 10 | 1.062971 | 0.963217 | 0.629834 | 00:25 |
| 11 | 1.051720 | 0.975645 | 0.618785 | 00:25 |
| 12 | 1.038297 | 0.985376 | 0.618785 | 00:25 |
| 13 | 1.022750 | 0.993750 | 0.615627 | 00:25 |
| 14 | 1.041870 | 0.983347 | 0.615627 | 00:25 |
| 15 | 1.035282 | 0.969584 | 0.628256 | 00:25 |
| 16 | 1.023992 | 0.969356 | 0.623520 | 00:25 |
| 17 | 1.027435 | 0.968060 | 0.622731 | 00:25 |
| 18 | 1.025964 | 0.973094 | 0.620363 | 00:25 |
| 19 | 1.024504 | 0.977706 | 0.622731 | 00:25 |
Better model found at epoch 10 with accuracy value: 0.6298342347145081. Done.
Fold 4 Training...
| epoch | train_loss | valid_loss | accuracy | time |
|---|---|---|---|---|
| 0 | 2.256476 | 1.512802 | 0.389415 | 00:25 |
| 1 | 1.904163 | 1.325793 | 0.492101 | 00:25 |
| 2 | 1.636944 | 1.250400 | 0.529226 | 00:25 |
| 3 | 1.430055 | 1.145097 | 0.543444 | 00:25 |
| 4 | 1.293298 | 1.075116 | 0.578199 | 00:25 |
| 5 | 1.217415 | 1.078131 | 0.571090 | 00:25 |
| 6 | 1.174639 | 1.043897 | 0.581359 | 00:25 |
| 7 | 1.155476 | 1.040792 | 0.590047 | 00:25 |
| 8 | 1.140831 | 1.024644 | 0.601106 | 00:25 |
| 9 | 1.116905 | 1.025880 | 0.602686 | 00:25 |
Better model found at epoch 0 with accuracy value: 0.3894154727458954. Better model found at epoch 1 with accuracy value: 0.49210110306739807. Better model found at epoch 2 with accuracy value: 0.5292258858680725. Better model found at epoch 3 with accuracy value: 0.5434439182281494. Better model found at epoch 4 with accuracy value: 0.578199028968811. Better model found at epoch 6 with accuracy value: 0.5813586115837097. Better model found at epoch 7 with accuracy value: 0.5900474190711975. Better model found at epoch 8 with accuracy value: 0.6011058688163757. Better model found at epoch 9 with accuracy value: 0.6026856303215027.
| epoch | train_loss | valid_loss | accuracy | time |
|---|---|---|---|---|
| 0 | 1.137233 | 1.027365 | 0.607425 | 00:25 |
| 1 | 1.122276 | 1.044981 | 0.596367 | 00:25 |
| 2 | 1.120769 | 1.083054 | 0.597156 | 00:25 |
| 3 | 1.112354 | 1.011656 | 0.612954 | 00:25 |
| 4 | 1.109588 | 0.982951 | 0.627962 | 00:25 |
| 5 | 1.077719 | 1.006319 | 0.611374 | 00:25 |
| 6 | 1.078218 | 0.970834 | 0.627962 | 00:25 |
| 7 | 1.061615 | 0.977131 | 0.627962 | 00:25 |
| 8 | 1.053669 | 0.971338 | 0.627962 | 00:25 |
| 9 | 1.049749 | 0.983110 | 0.626382 | 00:25 |
Better model found at epoch 0 with accuracy value: 0.6074249744415283. Better model found at epoch 3 with accuracy value: 0.6129541993141174. Better model found at epoch 4 with accuracy value: 0.6279621124267578.
| epoch | train_loss | valid_loss | accuracy | time |
|---|---|---|---|---|
| 0 | 1.080091 | 0.998627 | 0.614534 | 00:25 |
| 1 | 1.064752 | 0.986786 | 0.624013 | 00:25 |
| 2 | 1.064941 | 1.017184 | 0.602686 | 00:25 |
| 3 | 1.066792 | 0.987229 | 0.610584 | 00:25 |
| 4 | 1.070226 | 1.006477 | 0.623223 | 00:25 |
| 5 | 1.066195 | 1.041618 | 0.606635 | 00:25 |
| 6 | 1.058264 | 1.013997 | 0.609795 | 00:25 |
| 7 | 1.071260 | 1.040552 | 0.601106 | 00:25 |
| 8 | 1.062253 | 1.054215 | 0.597156 | 00:25 |
| 9 | 1.064196 | 1.010514 | 0.604265 | 00:25 |
| 10 | 1.051630 | 0.941206 | 0.642970 | 00:25 |
| 11 | 1.041221 | 0.958860 | 0.630332 | 00:25 |
| 12 | 1.040786 | 0.976533 | 0.624803 | 00:25 |
| 13 | 1.025025 | 0.954057 | 0.631122 | 00:25 |
| 14 | 1.037696 | 0.936135 | 0.640600 | 00:25 |
| 15 | 1.036540 | 0.939876 | 0.631912 | 00:25 |
| 16 | 1.027758 | 0.941670 | 0.632701 | 00:25 |
| 17 | 1.018483 | 0.936073 | 0.638231 | 00:25 |
| 18 | 1.030725 | 0.933316 | 0.638231 | 00:25 |
| 19 | 1.019870 | 0.938174 | 0.634281 | 00:25 |
Better model found at epoch 10 with accuracy value: 0.6429699659347534. Done.
Using an ensemble of all these models yields a significant improvement - it scores 0.342 on the public leaderboard. Let's try adding the lung-annotated versions of these models to the pot.
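As a sketch of how such an ensemble can be formed: assuming each fold's exported learner yields per-class probabilities (e.g. via `get_preds`), one simple approach is to average the probability vectors across folds and take the argmax. The `ensemble_predict` helper and the toy arrays below are illustrative, not part of the notebook:

```python
import numpy as np

def ensemble_predict(fold_probs):
    """Average class probabilities across folds and pick the top class.

    fold_probs: list of arrays, each of shape (n_samples, n_classes),
    e.g. the softmax outputs of each fold's exported learner.
    """
    mean_probs = np.mean(np.stack(fold_probs), axis=0)
    return mean_probs, mean_probs.argmax(axis=1)

# Toy example: two "folds", two samples, three classes.
p0 = np.array([[0.7, 0.2, 0.1], [0.1, 0.3, 0.6]])
p1 = np.array([[0.5, 0.4, 0.1], [0.2, 0.2, 0.6]])
probs, preds = ensemble_predict([p0, p1])
```

Averaging probabilities (rather than hard votes) keeps each model's confidence information, which tends to help when the fold models disagree.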
Resnet18 Lung Annotated 5-folds
for fold in range(nfolds):
    print(f'Fold {fold}')
    # Mark this fold's rows as the validation set
    train_df['current_val_fold'] = train_df.val_fold == fold
    dls = ImageDataLoaders.from_df(train_df, '/content/lung_ann/content/ann_cxr/',
                                   fn_col='image_fn', label_col='study_label', bs=128,
                                   seed=seed, valid_col='current_val_fold',
                                   batch_tfms=aug_transforms(
                                       mult=1.3,
                                       max_rotate=25,
                                       min_zoom=.9,
                                       max_zoom=1.3,
                                       max_lighting=.4,
                                       max_warp=.3
                                   ))
    learn = cnn_learner(dls, resnet18, metrics=metrics,
                        cbs=[MixUp(),
                             SaveModelCallback('accuracy', fname=f'resnet18_ann_fold_{fold}',
                                               reset_on_fit=False)],
                        path='/content/gdrive/MyDrive/covid19-detection/project-models/')
    random_seed(seed)
    print('Training...')
    learn.fit_one_cycle(10, lr)
    learn.fit_one_cycle(10, lr)
    learn.fit_one_cycle(20, lr)
    learn.load(f'resnet18_ann_fold_{fold}')  # load best model
    learn.export(f'resnet18_ann_fold_{fold}.pkl')
    print('Done.')
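The `reset_on_fit=False` flag matters here: the best-accuracy checkpoint is tracked across all three `fit_one_cycle` calls rather than restarting from scratch each fit. A minimal sketch of that tracking logic (plain Python illustrating the idea, not fastai's internals; `BestTracker` is a made-up name):

```python
class BestTracker:
    """Keep the best metric seen so far; optionally reset between fits."""
    def __init__(self, reset_on_fit=False):
        self.reset_on_fit = reset_on_fit
        self.best = float('-inf')

    def begin_fit(self):
        # With reset_on_fit=True the best value would be forgotten here.
        if self.reset_on_fit:
            self.best = float('-inf')

    def after_epoch(self, value):
        if value > self.best:
            self.best = value
            return True  # signal: save a checkpoint
        return False

tracker = BestTracker(reset_on_fit=False)
saved = []
for fit in ([0.58, 0.61], [0.60, 0.63]):  # two consecutive fits
    tracker.begin_fit()
    for acc in fit:
        if tracker.after_epoch(acc):
            saved.append(acc)
```

With `reset_on_fit=False`, the second fit's 0.60 does not trigger a save because 0.61 from the first fit is still the best, which matches the "Better model found" messages spanning the three training tables above.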
Fold 0 Training...
| epoch | train_loss | valid_loss | accuracy | time |
|---|---|---|---|---|
| 0 | 2.237028 | 1.457613 | 0.423047 | 00:15 |
| 1 | 2.005970 | 1.287756 | 0.499605 | 00:15 |
| 2 | 1.790800 | 1.150744 | 0.575375 | 00:14 |
| 3 | 1.600239 | 1.079062 | 0.586425 | 00:15 |
| 4 | 1.470649 | 1.096299 | 0.586425 | 00:14 |
| 5 | 1.369671 | 1.056554 | 0.599842 | 00:14 |
| 6 | 1.305756 | 1.050366 | 0.597474 | 00:14 |
| 7 | 1.260641 | 1.041989 | 0.596685 | 00:14 |
| 8 | 1.230181 | 1.040788 | 0.596685 | 00:14 |
| 9 | 1.214047 | 1.042391 | 0.595896 | 00:15 |
Better model found at epoch 0 with accuracy value: 0.4230465590953827. Better model found at epoch 1 with accuracy value: 0.49960535764694214. Better model found at epoch 2 with accuracy value: 0.5753749012947083. Better model found at epoch 3 with accuracy value: 0.5864246487617493. Better model found at epoch 5 with accuracy value: 0.5998421311378479.
| epoch | train_loss | valid_loss | accuracy | time |
|---|---|---|---|---|
| 0 | 1.247398 | 1.048126 | 0.598264 | 00:15 |
| 1 | 1.241044 | 1.070703 | 0.595107 | 00:15 |
| 2 | 1.224881 | 1.033238 | 0.591949 | 00:15 |
| 3 | 1.189975 | 1.024038 | 0.595107 | 00:15 |
| 4 | 1.159704 | 1.021845 | 0.602210 | 00:14 |
| 5 | 1.143724 | 1.009636 | 0.604578 | 00:15 |
| 6 | 1.121943 | 1.002951 | 0.607735 | 00:14 |
| 7 | 1.112716 | 0.998557 | 0.607735 | 00:15 |
| 8 | 1.095938 | 0.997393 | 0.609313 | 00:14 |
| 9 | 1.095875 | 0.996677 | 0.610892 | 00:14 |
Better model found at epoch 4 with accuracy value: 0.6022099256515503. Better model found at epoch 5 with accuracy value: 0.6045777201652527. Better model found at epoch 6 with accuracy value: 0.6077347993850708. Better model found at epoch 8 with accuracy value: 0.6093133091926575. Better model found at epoch 9 with accuracy value: 0.6108918786048889.
| epoch | train_loss | valid_loss | accuracy | time |
|---|---|---|---|---|
| 0 | 1.088843 | 0.991313 | 0.612470 | 00:15 |
| 1 | 1.094248 | 0.994824 | 0.611681 | 00:14 |
| 2 | 1.088900 | 0.994727 | 0.613260 | 00:15 |
| 3 | 1.088587 | 0.985635 | 0.614838 | 00:15 |
| 4 | 1.083624 | 1.000710 | 0.612470 | 00:14 |
| 5 | 1.081958 | 0.996613 | 0.620363 | 00:14 |
| 6 | 1.076530 | 0.996029 | 0.619574 | 00:14 |
| 7 | 1.069690 | 0.981255 | 0.617206 | 00:14 |
| 8 | 1.064353 | 0.975639 | 0.617995 | 00:14 |
| 9 | 1.067019 | 0.981098 | 0.612470 | 00:14 |
| 10 | 1.059972 | 0.980225 | 0.621152 | 00:15 |
| 11 | 1.048669 | 0.963401 | 0.625099 | 00:14 |
| 12 | 1.045197 | 0.962050 | 0.621942 | 00:15 |
| 13 | 1.053313 | 0.960501 | 0.625888 | 00:14 |
| 14 | 1.045528 | 0.959900 | 0.621942 | 00:15 |
| 15 | 1.043669 | 0.954884 | 0.627466 | 00:15 |
| 16 | 1.041976 | 0.952826 | 0.627466 | 00:15 |
| 17 | 1.043047 | 0.953546 | 0.625099 | 00:14 |
| 18 | 1.043145 | 0.955327 | 0.629045 | 00:15 |
| 19 | 1.038479 | 0.953145 | 0.625888 | 00:14 |
Better model found at epoch 0 with accuracy value: 0.6124703884124756. Better model found at epoch 2 with accuracy value: 0.6132596731185913. Better model found at epoch 3 with accuracy value: 0.614838182926178. Better model found at epoch 5 with accuracy value: 0.6203630566596985. Better model found at epoch 10 with accuracy value: 0.6211523413658142. Better model found at epoch 11 with accuracy value: 0.6250986456871033. Better model found at epoch 13 with accuracy value: 0.625887930393219. Better model found at epoch 15 with accuracy value: 0.6274664402008057. Better model found at epoch 18 with accuracy value: 0.6290450096130371. Done.
Fold 1 Training...
| epoch | train_loss | valid_loss | accuracy | time |
|---|---|---|---|---|
| 0 | 2.239647 | 1.523592 | 0.401736 | 00:14 |
| 1 | 1.983648 | 1.278028 | 0.498027 | 00:14 |
| 2 | 1.765474 | 1.134132 | 0.548540 | 00:15 |
| 3 | 1.611617 | 1.121699 | 0.554065 | 00:15 |
| 4 | 1.484394 | 1.089022 | 0.584846 | 00:15 |
| 5 | 1.378294 | 1.049262 | 0.595107 | 00:14 |
| 6 | 1.304728 | 1.044652 | 0.593528 | 00:14 |
| 7 | 1.268483 | 1.033315 | 0.595107 | 00:14 |
| 8 | 1.235919 | 1.037927 | 0.590371 | 00:15 |
| 9 | 1.218186 | 1.038241 | 0.592739 | 00:14 |
Better model found at epoch 0 with accuracy value: 0.40173637866973877. Better model found at epoch 1 with accuracy value: 0.49802684783935547. Better model found at epoch 2 with accuracy value: 0.5485398769378662. Better model found at epoch 3 with accuracy value: 0.5540646910667419. Better model found at epoch 4 with accuracy value: 0.5848460793495178. Better model found at epoch 5 with accuracy value: 0.5951065421104431.
| epoch | train_loss | valid_loss | accuracy | time |
|---|---|---|---|---|
| 0 | 1.265217 | 1.038098 | 0.600631 | 00:14 |
| 1 | 1.240889 | 1.053972 | 0.600631 | 00:14 |
| 2 | 1.209629 | 1.028423 | 0.613260 | 00:14 |
| 3 | 1.188690 | 1.022250 | 0.606156 | 00:15 |
| 4 | 1.171322 | 1.014402 | 0.616417 | 00:15 |
| 5 | 1.135420 | 0.994483 | 0.617995 | 00:15 |
| 6 | 1.121972 | 1.004164 | 0.625099 | 00:14 |
| 7 | 1.108761 | 0.999677 | 0.623520 | 00:15 |
| 8 | 1.102089 | 1.000486 | 0.623520 | 00:15 |
| 9 | 1.096284 | 0.995399 | 0.625099 | 00:14 |
Better model found at epoch 0 with accuracy value: 0.6006314158439636. Better model found at epoch 2 with accuracy value: 0.6132596731185913. Better model found at epoch 4 with accuracy value: 0.6164167523384094. Better model found at epoch 5 with accuracy value: 0.6179952621459961. Better model found at epoch 6 with accuracy value: 0.6250986456871033.
| epoch | train_loss | valid_loss | accuracy | time |
|---|---|---|---|---|
| 0 | 1.100730 | 1.002515 | 0.621152 | 00:15 |
| 1 | 1.101057 | 0.994272 | 0.625099 | 00:14 |
| 2 | 1.096202 | 0.988600 | 0.619574 | 00:14 |
| 3 | 1.093533 | 0.995527 | 0.622731 | 00:14 |
| 4 | 1.093181 | 1.005092 | 0.625099 | 00:14 |
| 5 | 1.087276 | 0.994782 | 0.625888 | 00:15 |
| 6 | 1.079061 | 0.967128 | 0.632991 | 00:15 |
| 7 | 1.070479 | 0.977005 | 0.633781 | 00:15 |
| 8 | 1.070608 | 0.993144 | 0.624309 | 00:14 |
| 9 | 1.064490 | 0.977914 | 0.632202 | 00:14 |
| 10 | 1.061715 | 0.971709 | 0.635359 | 00:15 |
| 11 | 1.059843 | 0.961256 | 0.634570 | 00:15 |
| 12 | 1.055895 | 0.951121 | 0.640884 | 00:15 |
| 13 | 1.049719 | 0.965699 | 0.637727 | 00:15 |
| 14 | 1.050539 | 0.963608 | 0.636148 | 00:15 |
| 15 | 1.045013 | 0.953037 | 0.640884 | 00:14 |
| 16 | 1.040625 | 0.956174 | 0.639305 | 00:15 |
| 17 | 1.043858 | 0.955480 | 0.640884 | 00:15 |
| 18 | 1.040384 | 0.952631 | 0.641673 | 00:14 |
| 19 | 1.036854 | 0.954133 | 0.640095 | 00:14 |
Better model found at epoch 5 with accuracy value: 0.625887930393219. Better model found at epoch 6 with accuracy value: 0.6329913139343262. Better model found at epoch 7 with accuracy value: 0.6337805986404419. Better model found at epoch 10 with accuracy value: 0.6353591084480286. Better model found at epoch 12 with accuracy value: 0.6408839821815491. Better model found at epoch 18 with accuracy value: 0.6416732668876648. Done.
Fold 2 Training...
| epoch | train_loss | valid_loss | accuracy | time |
|---|---|---|---|---|
| 0 | 2.259536 | 1.422615 | 0.455406 | 00:14 |
| 1 | 2.011620 | 1.265033 | 0.523283 | 00:15 |
| 2 | 1.802588 | 1.142860 | 0.561957 | 00:15 |
| 3 | 1.615768 | 1.086178 | 0.584846 | 00:14 |
| 4 | 1.473160 | 1.061084 | 0.588792 | 00:15 |
| 5 | 1.382665 | 1.043801 | 0.581689 | 00:14 |
| 6 | 1.315352 | 1.018217 | 0.602210 | 00:14 |
| 7 | 1.270930 | 1.009172 | 0.604578 | 00:14 |
| 8 | 1.232624 | 1.006826 | 0.607735 | 00:14 |
| 9 | 1.214664 | 1.005999 | 0.607735 | 00:14 |
Better model found at epoch 0 with accuracy value: 0.45540645718574524. Better model found at epoch 1 with accuracy value: 0.5232833623886108. Better model found at epoch 2 with accuracy value: 0.5619573593139648. Better model found at epoch 3 with accuracy value: 0.5848460793495178. Better model found at epoch 4 with accuracy value: 0.5887924432754517. Better model found at epoch 6 with accuracy value: 0.6022099256515503. Better model found at epoch 7 with accuracy value: 0.6045777201652527. Better model found at epoch 8 with accuracy value: 0.6077347993850708.
| epoch | train_loss | valid_loss | accuracy | time |
|---|---|---|---|---|
| 0 | 1.219483 | 0.999777 | 0.612470 | 00:14 |
| 1 | 1.200422 | 1.026304 | 0.588003 | 00:15 |
| 2 | 1.189951 | 0.999949 | 0.612470 | 00:14 |
| 3 | 1.163698 | 0.985818 | 0.617995 | 00:14 |
| 4 | 1.142187 | 0.975809 | 0.623520 | 00:14 |
| 5 | 1.119571 | 0.981066 | 0.620363 | 00:14 |
| 6 | 1.102327 | 0.972258 | 0.625888 | 00:14 |
| 7 | 1.098383 | 0.970311 | 0.625099 | 00:14 |
| 8 | 1.086761 | 0.970965 | 0.624309 | 00:14 |
| 9 | 1.090981 | 0.968915 | 0.626677 | 00:15 |
Better model found at epoch 0 with accuracy value: 0.6124703884124756. Better model found at epoch 3 with accuracy value: 0.6179952621459961. Better model found at epoch 4 with accuracy value: 0.6235201358795166. Better model found at epoch 6 with accuracy value: 0.625887930393219. Better model found at epoch 9 with accuracy value: 0.6266772150993347.
| epoch | train_loss | valid_loss | accuracy | time |
|---|---|---|---|---|
| 0 | 1.087717 | 0.967165 | 0.626677 | 00:15 |
| 1 | 1.078591 | 0.973615 | 0.624309 | 00:14 |
| 2 | 1.082432 | 0.963878 | 0.629834 | 00:14 |
| 3 | 1.083198 | 0.977569 | 0.611681 | 00:15 |
| 4 | 1.087385 | 0.963257 | 0.632991 | 00:14 |
| 5 | 1.082487 | 0.975096 | 0.625099 | 00:14 |
| 6 | 1.078947 | 0.970891 | 0.622731 | 00:15 |
| 7 | 1.071663 | 0.982088 | 0.625099 | 00:14 |
| 8 | 1.069831 | 0.963982 | 0.630624 | 00:14 |
| 9 | 1.068072 | 0.953159 | 0.625888 | 00:14 |
| 10 | 1.062016 | 0.961971 | 0.629834 | 00:14 |
| 11 | 1.056213 | 0.955252 | 0.634570 | 00:14 |
| 12 | 1.055175 | 0.954052 | 0.633781 | 00:15 |
| 13 | 1.052308 | 0.953465 | 0.634570 | 00:15 |
| 14 | 1.044962 | 0.945274 | 0.638516 | 00:14 |
| 15 | 1.033538 | 0.947244 | 0.636938 | 00:15 |
| 16 | 1.035177 | 0.943843 | 0.638516 | 00:14 |
| 17 | 1.035885 | 0.943496 | 0.637727 | 00:15 |
| 18 | 1.037163 | 0.943838 | 0.638516 | 00:15 |
| 19 | 1.038849 | 0.945661 | 0.635359 | 00:15 |
Better model found at epoch 2 with accuracy value: 0.6298342347145081. Better model found at epoch 4 with accuracy value: 0.6329913139343262. Better model found at epoch 11 with accuracy value: 0.6345698237419128. Better model found at epoch 14 with accuracy value: 0.6385161876678467. Done.
Fold 3 Training...
| epoch | train_loss | valid_loss | accuracy | time |
|---|---|---|---|---|
| 0 | 2.262327 | 1.412552 | 0.456985 | 00:15 |
| 1 | 2.004897 | 1.206750 | 0.537490 | 00:14 |
| 2 | 1.799374 | 1.192209 | 0.517758 | 00:15 |
| 3 | 1.620904 | 1.061047 | 0.587214 | 00:14 |
| 4 | 1.492836 | 1.035690 | 0.607735 | 00:14 |
| 5 | 1.391925 | 1.047489 | 0.591949 | 00:15 |
| 6 | 1.324623 | 1.003708 | 0.607735 | 00:14 |
| 7 | 1.278229 | 1.001693 | 0.610892 | 00:14 |
| 8 | 1.233637 | 1.002645 | 0.608524 | 00:14 |
| 9 | 1.219073 | 1.002982 | 0.609313 | 00:14 |
Better model found at epoch 0 with accuracy value: 0.4569849967956543. Better model found at epoch 1 with accuracy value: 0.5374901294708252. Better model found at epoch 3 with accuracy value: 0.5872138738632202. Better model found at epoch 4 with accuracy value: 0.6077347993850708. Better model found at epoch 7 with accuracy value: 0.6108918786048889.
| epoch | train_loss | valid_loss | accuracy | time |
|---|---|---|---|---|
| 0 | 1.212861 | 1.000544 | 0.611681 | 00:14 |
| 1 | 1.204065 | 1.009491 | 0.622731 | 00:15 |
| 2 | 1.196712 | 0.985777 | 0.626677 | 00:15 |
| 3 | 1.172543 | 1.006143 | 0.612470 | 00:14 |
| 4 | 1.156599 | 0.977756 | 0.619574 | 00:15 |
| 5 | 1.135836 | 0.979242 | 0.624309 | 00:15 |
| 6 | 1.114483 | 0.970968 | 0.621152 | 00:15 |
| 7 | 1.107795 | 0.970012 | 0.621942 | 00:14 |
| 8 | 1.104261 | 0.968516 | 0.622731 | 00:14 |
| 9 | 1.100477 | 0.971237 | 0.622731 | 00:14 |
Better model found at epoch 0 with accuracy value: 0.6116811633110046. Better model found at epoch 1 with accuracy value: 0.6227308511734009. Better model found at epoch 2 with accuracy value: 0.6266772150993347.
| epoch | train_loss | valid_loss | accuracy | time |
|---|---|---|---|---|
| 0 | 1.159183 | 0.985522 | 0.622731 | 00:14 |
| 1 | 1.149433 | 0.980990 | 0.629045 | 00:15 |
| 2 | 1.135430 | 0.983119 | 0.621152 | 00:15 |
| 3 | 1.133054 | 0.992202 | 0.608524 | 00:14 |
| 4 | 1.120037 | 0.985035 | 0.621942 | 00:14 |
| 5 | 1.107844 | 0.970617 | 0.629045 | 00:14 |
| 6 | 1.109687 | 0.959678 | 0.638516 | 00:15 |
| 7 | 1.096424 | 0.948783 | 0.630624 | 00:14 |
| 8 | 1.089560 | 0.969129 | 0.632991 | 00:15 |
| 9 | 1.085088 | 0.956780 | 0.626677 | 00:14 |
| 10 | 1.076680 | 0.943157 | 0.631413 | 00:15 |
| 11 | 1.069558 | 0.944122 | 0.636148 | 00:15 |
| 12 | 1.068501 | 0.942063 | 0.631413 | 00:14 |
| 13 | 1.056772 | 0.942410 | 0.634570 | 00:14 |
| 14 | 1.051593 | 0.932911 | 0.640884 | 00:14 |
| 15 | 1.048424 | 0.932405 | 0.635359 | 00:15 |
| 16 | 1.042670 | 0.932113 | 0.636148 | 00:15 |
| 17 | 1.043591 | 0.930311 | 0.636148 | 00:14 |
| 18 | 1.038961 | 0.929954 | 0.635359 | 00:14 |
| 19 | 1.035373 | 0.931046 | 0.636148 | 00:15 |
Better model found at epoch 1 with accuracy value: 0.6290450096130371. Better model found at epoch 6 with accuracy value: 0.6385161876678467. Better model found at epoch 14 with accuracy value: 0.6408839821815491. Done.
Fold 4 Training...
| epoch | train_loss | valid_loss | accuracy | time |
|---|---|---|---|---|
| 0 | 2.244611 | 1.528653 | 0.412322 | 00:15 |
| 1 | 2.000252 | 1.311453 | 0.494471 | 00:15 |
| 2 | 1.788661 | 1.132851 | 0.561611 | 00:14 |
| 3 | 1.622675 | 1.091156 | 0.597156 | 00:14 |
| 4 | 1.486402 | 1.079034 | 0.589257 | 00:15 |
| 5 | 1.390672 | 1.054176 | 0.599526 | 00:14 |
| 6 | 1.310692 | 1.026871 | 0.614534 | 00:14 |
| 7 | 1.266693 | 1.023468 | 0.614534 | 00:15 |
| 8 | 1.240914 | 1.016933 | 0.615324 | 00:14 |
| 9 | 1.223765 | 1.016068 | 0.614534 | 00:15 |
Better model found at epoch 0 with accuracy value: 0.4123222827911377. Better model found at epoch 1 with accuracy value: 0.4944707751274109. Better model found at epoch 2 with accuracy value: 0.5616113543510437. Better model found at epoch 3 with accuracy value: 0.5971564054489136. Better model found at epoch 5 with accuracy value: 0.599526047706604. Better model found at epoch 6 with accuracy value: 0.6145339608192444. Better model found at epoch 8 with accuracy value: 0.6153238415718079.
| epoch | train_loss | valid_loss | accuracy | time |
|---|---|---|---|---|
| 0 | 1.209523 | 1.009892 | 0.616904 | 00:15 |
| 1 | 1.204532 | 1.002420 | 0.615324 | 00:15 |
| 2 | 1.191105 | 0.982931 | 0.620063 | 00:15 |
| 3 | 1.160640 | 0.990418 | 0.624803 | 00:15 |
| 4 | 1.141727 | 0.973118 | 0.631122 | 00:15 |
| 5 | 1.121299 | 0.964844 | 0.635071 | 00:14 |
| 6 | 1.111813 | 0.958326 | 0.632701 | 00:15 |
| 7 | 1.098314 | 0.956512 | 0.627962 | 00:15 |
| 8 | 1.098397 | 0.957661 | 0.630332 | 00:15 |
| 9 | 1.090573 | 0.955346 | 0.628752 | 00:15 |
Better model found at epoch 0 with accuracy value: 0.6169036626815796. Better model found at epoch 2 with accuracy value: 0.6200631856918335. Better model found at epoch 3 with accuracy value: 0.6248025298118591. Better model found at epoch 4 with accuracy value: 0.6311216354370117. Better model found at epoch 5 with accuracy value: 0.6350710988044739.
| epoch | train_loss | valid_loss | accuracy | time |
|---|---|---|---|---|
| 0 | 1.118069 | 0.960477 | 0.635071 | 00:15 |
| 1 | 1.108627 | 0.957682 | 0.635071 | 00:14 |
| 2 | 1.097505 | 0.956380 | 0.634281 | 00:14 |
| 3 | 1.095148 | 0.955877 | 0.635071 | 00:15 |
| 4 | 1.090379 | 0.966775 | 0.628752 | 00:15 |
| 5 | 1.088189 | 0.959265 | 0.633491 | 00:15 |
| 6 | 1.087619 | 0.952812 | 0.630332 | 00:15 |
| 7 | 1.079178 | 0.951993 | 0.624803 | 00:15 |
| 8 | 1.075722 | 0.925925 | 0.645340 | 00:15 |
| 9 | 1.067746 | 0.949280 | 0.629542 | 00:15 |
| 10 | 1.066600 | 0.941532 | 0.639810 | 00:14 |
| 11 | 1.055355 | 0.921151 | 0.646919 | 00:14 |
| 12 | 1.055846 | 0.918667 | 0.646919 | 00:15 |
| 13 | 1.059497 | 0.923488 | 0.644550 | 00:15 |
| 14 | 1.051665 | 0.922915 | 0.642180 | 00:15 |
| 15 | 1.052052 | 0.918623 | 0.646130 | 00:15 |
| 16 | 1.056029 | 0.916042 | 0.639810 | 00:15 |
| 17 | 1.056360 | 0.916281 | 0.643760 | 00:14 |
| 18 | 1.051830 | 0.918378 | 0.646130 | 00:14 |
| 19 | 1.044783 | 0.917105 | 0.646919 | 00:14 |
Better model found at epoch 8 with accuracy value: 0.6453396677970886. Better model found at epoch 11 with accuracy value: 0.6469194293022156. Done.
Resnet34 Lung Annotated 5-folds
for fold in range(nfolds):
    print(f'Fold {fold}')
    # Mark this fold's rows as the validation set
    train_df['current_val_fold'] = train_df.val_fold == fold
    dls = ImageDataLoaders.from_df(train_df, '/content/lung_ann/content/ann_cxr/',
                                   fn_col='image_fn', label_col='study_label', bs=128,
                                   seed=seed, valid_col='current_val_fold',
                                   batch_tfms=aug_transforms(
                                       mult=1.3,
                                       max_rotate=25,
                                       min_zoom=.9,
                                       max_zoom=1.3,
                                       max_lighting=.4,
                                       max_warp=.3
                                   ))
    learn = cnn_learner(dls, resnet34, metrics=metrics,
                        cbs=[MixUp(),
                             SaveModelCallback('accuracy', fname=f'resnet34_ann_fold_{fold}',
                                               reset_on_fit=False)],
                        path='/content/gdrive/MyDrive/covid19-detection/project-models/')
    random_seed(seed)
    print('Training...')
    learn.fit_one_cycle(10, lr)
    learn.fit_one_cycle(10, lr)
    learn.fit_one_cycle(20, lr)
    learn.load(f'resnet34_ann_fold_{fold}')  # load best model
    learn.export(f'resnet34_ann_fold_{fold}.pkl')
    print('Done.')
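The `valid_col` mechanism above relies on each row of `train_df` carrying a precomputed `val_fold` index. One simple way such fold labels can be assigned is a shuffled round-robin; this is a sketch under that assumption (`rng`, `df`, and the modulo scheme are illustrative, not the notebook's actual fold-assignment code):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)
df = pd.DataFrame({'image_fn': [f'img_{i}.png' for i in range(10)]})

nfolds = 5
# Shuffle row positions, then deal them round-robin into nfolds folds,
# so every fold gets an equal share of the shuffled rows.
df['val_fold'] = rng.permutation(len(df)) % nfolds

# For a given fold, mark its rows as the validation set (as in the loop above):
fold = 0
df['current_val_fold'] = df.val_fold == fold
```

Each fold then serves exactly once as the validation set while the remaining rows train the model, which is what the five per-fold training runs above cycle through.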
Fold 0 Training...
| epoch | train_loss | valid_loss | accuracy | time |
|---|---|---|---|---|
| 0 | 2.255134 | 1.369144 | 0.455406 | 00:24 |
| 1 | 2.042156 | 1.235394 | 0.535122 | 00:24 |
| 2 | 1.812929 | 1.134472 | 0.584057 | 00:24 |
| 3 | 1.636160 | 1.104401 | 0.596685 | 00:24 |
| 4 | 1.498461 | 1.084943 | 0.595896 | 00:24 |
| 5 | 1.393923 | 1.041812 | 0.597474 | 00:24 |
| 6 | 1.327748 | 1.035429 | 0.604578 | 00:24 |
| 7 | 1.286115 | 1.026783 | 0.599053 | 00:24 |
| 8 | 1.249744 | 1.026357 | 0.601421 | 00:24 |
| 9 | 1.224982 | 1.025222 | 0.605367 | 00:24 |
Better model found at epoch 0 with accuracy value: 0.45540645718574524. Better model found at epoch 1 with accuracy value: 0.5351223349571228. Better model found at epoch 2 with accuracy value: 0.5840568542480469. Better model found at epoch 3 with accuracy value: 0.5966851115226746. Better model found at epoch 5 with accuracy value: 0.5974743366241455. Better model found at epoch 6 with accuracy value: 0.6045777201652527. Better model found at epoch 9 with accuracy value: 0.6053670048713684.
| epoch | train_loss | valid_loss | accuracy | time |
|---|---|---|---|---|
| 0 | 1.208238 | 1.023672 | 0.602999 | 00:24 |
| 1 | 1.211033 | 1.075028 | 0.596685 | 00:24 |
| 2 | 1.194134 | 1.030027 | 0.607735 | 00:24 |
| 3 | 1.166628 | 1.010624 | 0.604578 | 00:24 |
| 4 | 1.143425 | 1.007962 | 0.607735 | 00:24 |
| 5 | 1.128841 | 0.998547 | 0.609313 | 00:24 |
| 6 | 1.112710 | 0.994732 | 0.606156 | 00:24 |
| 7 | 1.103441 | 0.991234 | 0.604578 | 00:24 |
| 8 | 1.090623 | 0.986268 | 0.607735 | 00:24 |
| 9 | 1.084118 | 0.986558 | 0.606946 | 00:24 |
Better model found at epoch 2 with accuracy value: 0.6077347993850708. Better model found at epoch 5 with accuracy value: 0.6093133091926575.
| epoch | train_loss | valid_loss | accuracy | time |
|---|---|---|---|---|
| 0 | 1.096640 | 0.990299 | 0.614049 | 00:24 |
| 1 | 1.093376 | 0.992216 | 0.607735 | 00:24 |
| 2 | 1.089498 | 0.981720 | 0.609313 | 00:24 |
| 3 | 1.087825 | 0.986179 | 0.617206 | 00:24 |
| 4 | 1.085086 | 0.988908 | 0.609313 | 00:24 |
| 5 | 1.079509 | 0.997544 | 0.612470 | 00:24 |
| 6 | 1.075347 | 0.987415 | 0.617206 | 00:24 |
| 7 | 1.065614 | 0.967906 | 0.617206 | 00:24 |
| 8 | 1.062245 | 0.968866 | 0.627466 | 00:24 |
| 9 | 1.055328 | 0.977457 | 0.621152 | 00:24 |
| 10 | 1.052387 | 0.969015 | 0.621942 | 00:24 |
| 11 | 1.040229 | 0.958490 | 0.630624 | 00:24 |
| 12 | 1.039233 | 0.954197 | 0.622731 | 00:24 |
| 13 | 1.042616 | 0.952115 | 0.623520 | 00:24 |
| 14 | 1.033523 | 0.947644 | 0.631413 | 00:24 |
| 15 | 1.037319 | 0.944815 | 0.632991 | 00:24 |
| 16 | 1.036302 | 0.944527 | 0.629045 | 00:24 |
| 17 | 1.033156 | 0.945694 | 0.627466 | 00:24 |
| 18 | 1.032811 | 0.947977 | 0.626677 | 00:24 |
| 19 | 1.027988 | 0.946033 | 0.626677 | 00:24 |
Better model found at epoch 0 with accuracy value: 0.614048957824707. Better model found at epoch 3 with accuracy value: 0.6172059774398804. Better model found at epoch 8 with accuracy value: 0.6274664402008057. Better model found at epoch 11 with accuracy value: 0.6306235194206238. Better model found at epoch 14 with accuracy value: 0.6314128041267395. Better model found at epoch 15 with accuracy value: 0.6329913139343262. Done. Fold 1 Training...
| epoch | train_loss | valid_loss | accuracy | time |
|---|---|---|---|---|
| 0 | 2.224711 | 1.557302 | 0.411997 | 00:24 |
| 1 | 2.005794 | 1.249574 | 0.519337 | 00:24 |
| 2 | 1.779810 | 1.200522 | 0.535122 | 00:24 |
| 3 | 1.619759 | 1.148483 | 0.547751 | 00:24 |
| 4 | 1.499091 | 1.097073 | 0.570639 | 00:24 |
| 5 | 1.395772 | 1.070730 | 0.591160 | 00:24 |
| 6 | 1.327868 | 1.061864 | 0.580900 | 00:24 |
| 7 | 1.276394 | 1.042853 | 0.594317 | 00:24 |
| 8 | 1.241380 | 1.039880 | 0.596685 | 00:24 |
| 9 | 1.228804 | 1.038988 | 0.592739 | 00:24 |
Better model found at epoch 0 with accuracy value: 0.41199684143066406. Better model found at epoch 1 with accuracy value: 0.519336998462677. Better model found at epoch 2 with accuracy value: 0.5351223349571228. Better model found at epoch 3 with accuracy value: 0.5477505922317505. Better model found at epoch 4 with accuracy value: 0.5706393122673035. Better model found at epoch 5 with accuracy value: 0.591160237789154. Better model found at epoch 7 with accuracy value: 0.5943172574043274. Better model found at epoch 8 with accuracy value: 0.5966851115226746.
| epoch | train_loss | valid_loss | accuracy | time |
|---|---|---|---|---|
| 0 | 1.209337 | 1.036028 | 0.599053 | 00:24 |
| 1 | 1.204559 | 1.027038 | 0.607735 | 00:24 |
| 2 | 1.181852 | 1.032433 | 0.595896 | 00:24 |
| 3 | 1.164912 | 1.028123 | 0.610103 | 00:24 |
| 4 | 1.146108 | 1.007662 | 0.623520 | 00:24 |
| 5 | 1.130537 | 0.999928 | 0.621942 | 00:24 |
| 6 | 1.109624 | 1.010771 | 0.611681 | 00:24 |
| 7 | 1.097322 | 0.997270 | 0.614049 | 00:24 |
| 8 | 1.097464 | 0.997544 | 0.614838 | 00:24 |
| 9 | 1.091984 | 0.995783 | 0.615627 | 00:24 |
Better model found at epoch 0 with accuracy value: 0.599052906036377. Better model found at epoch 1 with accuracy value: 0.6077347993850708. Better model found at epoch 3 with accuracy value: 0.6101025938987732. Better model found at epoch 4 with accuracy value: 0.6235201358795166.
| epoch | train_loss | valid_loss | accuracy | time |
|---|---|---|---|---|
| 0 | 1.110538 | 1.010969 | 0.617206 | 00:24 |
| 1 | 1.110837 | 0.999319 | 0.625099 | 00:24 |
| 2 | 1.109388 | 0.998921 | 0.618785 | 00:24 |
| 3 | 1.100695 | 1.002027 | 0.620363 | 00:24 |
| 4 | 1.104554 | 1.017453 | 0.612470 | 00:24 |
| 5 | 1.092685 | 1.005584 | 0.620363 | 00:24 |
| 6 | 1.085164 | 0.984015 | 0.620363 | 00:24 |
| 7 | 1.072577 | 0.985938 | 0.617995 | 00:24 |
| 8 | 1.073172 | 0.988001 | 0.621152 | 00:24 |
| 9 | 1.065924 | 0.962552 | 0.625099 | 00:24 |
| 10 | 1.058865 | 0.969567 | 0.625099 | 00:24 |
| 11 | 1.057944 | 0.975480 | 0.629045 | 00:24 |
| 12 | 1.054787 | 0.952880 | 0.636148 | 00:24 |
| 13 | 1.047648 | 0.970808 | 0.625888 | 00:24 |
| 14 | 1.040709 | 0.966669 | 0.626677 | 00:24 |
| 15 | 1.035862 | 0.964864 | 0.633781 | 00:24 |
| 16 | 1.033269 | 0.964029 | 0.629834 | 00:24 |
| 17 | 1.040861 | 0.959143 | 0.629045 | 00:24 |
| 18 | 1.036682 | 0.956637 | 0.630624 | 00:24 |
| 19 | 1.038073 | 0.958620 | 0.628256 | 00:24 |
Better model found at epoch 1 with accuracy value: 0.6250986456871033. Better model found at epoch 11 with accuracy value: 0.6290450096130371. Better model found at epoch 12 with accuracy value: 0.6361483931541443. Done. Fold 2 Training...
| epoch | train_loss | valid_loss | accuracy | time |
|---|---|---|---|---|
| 0 | 2.237650 | 1.520169 | 0.404104 | 00:24 |
| 1 | 2.025900 | 1.267882 | 0.492502 | 00:24 |
| 2 | 1.814039 | 1.156065 | 0.537490 | 00:24 |
| 3 | 1.631270 | 1.107417 | 0.573796 | 00:24 |
| 4 | 1.482392 | 1.072049 | 0.588003 | 00:24 |
| 5 | 1.387941 | 1.048244 | 0.587214 | 00:24 |
| 6 | 1.318254 | 1.035094 | 0.598264 | 00:24 |
| 7 | 1.281294 | 1.029503 | 0.606156 | 00:24 |
| 8 | 1.239734 | 1.025099 | 0.602999 | 00:24 |
| 9 | 1.226306 | 1.023594 | 0.602999 | 00:24 |
Better model found at epoch 0 with accuracy value: 0.40410417318344116. Better model found at epoch 1 with accuracy value: 0.49250197410583496. Better model found at epoch 2 with accuracy value: 0.5374901294708252. Better model found at epoch 3 with accuracy value: 0.5737963914871216. Better model found at epoch 4 with accuracy value: 0.5880031585693359. Better model found at epoch 6 with accuracy value: 0.5982636213302612. Better model found at epoch 7 with accuracy value: 0.6061562895774841.
| epoch | train_loss | valid_loss | accuracy | time |
|---|---|---|---|---|
| 0 | 1.233513 | 1.019410 | 0.605367 | 00:24 |
| 1 | 1.212643 | 1.025844 | 0.599053 | 00:24 |
| 2 | 1.189801 | 1.018473 | 0.617995 | 00:24 |
| 3 | 1.164950 | 1.003453 | 0.615627 | 00:24 |
| 4 | 1.144682 | 0.995235 | 0.623520 | 00:24 |
| 5 | 1.121766 | 0.990279 | 0.622731 | 00:24 |
| 6 | 1.103820 | 0.985983 | 0.618785 | 00:24 |
| 7 | 1.102669 | 0.979592 | 0.621942 | 00:24 |
| 8 | 1.085981 | 0.980048 | 0.615627 | 00:24 |
| 9 | 1.081677 | 0.978023 | 0.621152 | 00:24 |
Better model found at epoch 2 with accuracy value: 0.6179952621459961. Better model found at epoch 4 with accuracy value: 0.6235201358795166.
| epoch | train_loss | valid_loss | accuracy | time |
|---|---|---|---|---|
| 0 | 1.113441 | 0.992174 | 0.622731 | 00:24 |
| 1 | 1.101354 | 0.991977 | 0.621942 | 00:24 |
| 2 | 1.103836 | 0.996055 | 0.619574 | 00:24 |
| 3 | 1.102502 | 0.992902 | 0.619574 | 00:24 |
| 4 | 1.099338 | 0.993709 | 0.621152 | 00:24 |
| 5 | 1.092476 | 0.989530 | 0.625888 | 00:24 |
| 6 | 1.086609 | 0.978957 | 0.622731 | 00:24 |
| 7 | 1.080230 | 0.986261 | 0.622731 | 00:24 |
| 8 | 1.077052 | 0.981445 | 0.627466 | 00:24 |
| 9 | 1.070122 | 0.968279 | 0.626677 | 00:24 |
| 10 | 1.058643 | 0.965988 | 0.631413 | 00:24 |
| 11 | 1.052799 | 0.965490 | 0.630624 | 00:24 |
| 12 | 1.049664 | 0.965171 | 0.630624 | 00:24 |
| 13 | 1.044741 | 0.961923 | 0.630624 | 00:24 |
| 14 | 1.039801 | 0.960764 | 0.631413 | 00:24 |
| 15 | 1.029003 | 0.960531 | 0.633781 | 00:24 |
| 16 | 1.033105 | 0.957965 | 0.634570 | 00:24 |
| 17 | 1.031923 | 0.956929 | 0.632202 | 00:24 |
| 18 | 1.029531 | 0.956849 | 0.633781 | 00:24 |
| 19 | 1.034204 | 0.958332 | 0.632202 | 00:24 |
Better model found at epoch 5 with accuracy value: 0.625887930393219.
Better model found at epoch 8 with accuracy value: 0.6274664402008057.
Better model found at epoch 10 with accuracy value: 0.6314128041267395.
Better model found at epoch 15 with accuracy value: 0.6337805986404419.
Better model found at epoch 16 with accuracy value: 0.6345698237419128.
Done. Fold 3 Training...
| epoch | train_loss | valid_loss | accuracy | time |
|---|---|---|---|---|
| 0 | 2.213700 | 1.437796 | 0.429361 | 00:24 |
| 1 | 2.004562 | 1.177346 | 0.531965 | 00:24 |
| 2 | 1.797008 | 1.150558 | 0.558800 | 00:24 |
| 3 | 1.631078 | 1.094148 | 0.577743 | 00:24 |
| 4 | 1.498086 | 1.040842 | 0.591160 | 00:24 |
| 5 | 1.396913 | 1.050512 | 0.587214 | 00:24 |
| 6 | 1.316317 | 1.006949 | 0.610103 | 00:24 |
| 7 | 1.280120 | 0.998892 | 0.612470 | 00:24 |
| 8 | 1.249417 | 1.000149 | 0.615627 | 00:24 |
| 9 | 1.237198 | 0.999567 | 0.617995 | 00:24 |
Better model found at epoch 0 with accuracy value: 0.42936068773269653.
Better model found at epoch 1 with accuracy value: 0.5319652557373047.
Better model found at epoch 2 with accuracy value: 0.5588003396987915.
Better model found at epoch 3 with accuracy value: 0.5777426958084106.
Better model found at epoch 4 with accuracy value: 0.591160237789154.
Better model found at epoch 6 with accuracy value: 0.6101025938987732.
Better model found at epoch 7 with accuracy value: 0.6124703884124756.
Better model found at epoch 8 with accuracy value: 0.6156274676322937.
Better model found at epoch 9 with accuracy value: 0.6179952621459961.
| epoch | train_loss | valid_loss | accuracy | time |
|---|---|---|---|---|
| 0 | 1.220938 | 0.995555 | 0.613260 | 00:24 |
| 1 | 1.201253 | 0.990843 | 0.610892 | 00:24 |
| 2 | 1.190812 | 0.996631 | 0.614838 | 00:24 |
| 3 | 1.161362 | 0.991885 | 0.614838 | 00:24 |
| 4 | 1.151440 | 0.972386 | 0.620363 | 00:24 |
| 5 | 1.131907 | 0.972573 | 0.610892 | 00:24 |
| 6 | 1.107726 | 0.965468 | 0.624309 | 00:24 |
| 7 | 1.100827 | 0.962169 | 0.620363 | 00:24 |
| 8 | 1.093484 | 0.958866 | 0.620363 | 00:24 |
| 9 | 1.089020 | 0.963642 | 0.620363 | 00:24 |
Better model found at epoch 4 with accuracy value: 0.6203630566596985.
Better model found at epoch 6 with accuracy value: 0.6243094205856323.
| epoch | train_loss | valid_loss | accuracy | time |
|---|---|---|---|---|
| 0 | 1.100032 | 0.967131 | 0.619574 | 00:24 |
| 1 | 1.095684 | 0.959103 | 0.619574 | 00:24 |
| 2 | 1.087572 | 0.958417 | 0.615627 | 00:24 |
| 3 | 1.084826 | 0.948159 | 0.621152 | 00:24 |
| 4 | 1.082677 | 0.986999 | 0.621942 | 00:24 |
| 5 | 1.081553 | 0.960375 | 0.629045 | 00:24 |
| 6 | 1.083204 | 0.966704 | 0.627466 | 00:24 |
| 7 | 1.079093 | 0.955179 | 0.625888 | 00:24 |
| 8 | 1.066524 | 0.974708 | 0.621942 | 00:24 |
| 9 | 1.063889 | 0.960181 | 0.623520 | 00:24 |
| 10 | 1.062312 | 0.946377 | 0.629045 | 00:24 |
| 11 | 1.054692 | 0.940723 | 0.632991 | 00:24 |
| 12 | 1.057748 | 0.941391 | 0.633781 | 00:24 |
| 13 | 1.038468 | 0.942069 | 0.630624 | 00:24 |
| 14 | 1.033975 | 0.934788 | 0.636148 | 00:24 |
| 15 | 1.037402 | 0.928791 | 0.633781 | 00:24 |
| 16 | 1.030657 | 0.932942 | 0.635359 | 00:24 |
| 17 | 1.028332 | 0.930068 | 0.632991 | 00:24 |
| 18 | 1.026874 | 0.930071 | 0.633781 | 00:24 |
| 19 | 1.020573 | 0.931080 | 0.633781 | 00:24 |
Better model found at epoch 5 with accuracy value: 0.6290450096130371.
Better model found at epoch 11 with accuracy value: 0.6329913139343262.
Better model found at epoch 12 with accuracy value: 0.6337805986404419.
Better model found at epoch 14 with accuracy value: 0.6361483931541443.
Done. Fold 4 Training...
| epoch | train_loss | valid_loss | accuracy | time |
|---|---|---|---|---|
| 0 | 2.188878 | 1.565658 | 0.391785 | 00:24 |
| 1 | 1.970468 | 1.219425 | 0.501580 | 00:24 |
| 2 | 1.799959 | 1.150831 | 0.544234 | 00:24 |
| 3 | 1.625025 | 1.084483 | 0.574250 | 00:24 |
| 4 | 1.489772 | 1.044948 | 0.617694 | 00:24 |
| 5 | 1.394725 | 1.025318 | 0.607425 | 00:24 |
| 6 | 1.322852 | 1.015245 | 0.614534 | 00:24 |
| 7 | 1.281021 | 1.004739 | 0.616904 | 00:24 |
| 8 | 1.250129 | 1.002400 | 0.622433 | 00:24 |
| 9 | 1.239141 | 1.003117 | 0.622433 | 00:24 |
Better model found at epoch 0 with accuracy value: 0.3917851448059082.
Better model found at epoch 1 with accuracy value: 0.501579761505127.
Better model found at epoch 2 with accuracy value: 0.5442337989807129.
Better model found at epoch 3 with accuracy value: 0.5742496252059937.
Better model found at epoch 4 with accuracy value: 0.6176935434341431.
Better model found at epoch 8 with accuracy value: 0.6224328875541687.
| epoch | train_loss | valid_loss | accuracy | time |
|---|---|---|---|---|
| 0 | 1.204851 | 1.004199 | 0.624803 | 00:24 |
| 1 | 1.200470 | 0.998745 | 0.616114 | 00:24 |
| 2 | 1.193636 | 0.983548 | 0.624013 | 00:24 |
| 3 | 1.169760 | 0.982016 | 0.625592 | 00:24 |
| 4 | 1.142466 | 0.956472 | 0.633491 | 00:24 |
| 5 | 1.129230 | 0.952697 | 0.632701 | 00:24 |
| 6 | 1.115902 | 0.952511 | 0.635861 | 00:24 |
| 7 | 1.106358 | 0.950358 | 0.636651 | 00:24 |
| 8 | 1.096154 | 0.950773 | 0.638231 | 00:24 |
| 9 | 1.091452 | 0.949475 | 0.635071 | 00:24 |
Better model found at epoch 0 with accuracy value: 0.6248025298118591.
Better model found at epoch 3 with accuracy value: 0.6255924105644226.
Better model found at epoch 4 with accuracy value: 0.6334913372993469.
Better model found at epoch 6 with accuracy value: 0.6358609795570374.
Better model found at epoch 7 with accuracy value: 0.6366508603096008.
Better model found at epoch 8 with accuracy value: 0.6382306218147278.
| epoch | train_loss | valid_loss | accuracy | time |
|---|---|---|---|---|
| 0 | 1.094858 | 0.946670 | 0.639021 | 00:24 |
| 1 | 1.084634 | 0.946020 | 0.637441 | 00:24 |
| 2 | 1.084427 | 0.939012 | 0.635071 | 00:24 |
| 3 | 1.084201 | 0.941586 | 0.636651 | 00:24 |
| 4 | 1.084661 | 0.949933 | 0.635861 | 00:24 |
| 5 | 1.082939 | 0.972396 | 0.635071 | 00:24 |
| 6 | 1.080598 | 0.937131 | 0.636651 | 00:24 |
| 7 | 1.071371 | 0.932569 | 0.639810 | 00:24 |
| 8 | 1.067934 | 0.922281 | 0.640600 | 00:24 |
| 9 | 1.061146 | 0.941030 | 0.636651 | 00:24 |
| 10 | 1.062546 | 0.930829 | 0.637441 | 00:24 |
| 11 | 1.050749 | 0.907765 | 0.645340 | 00:24 |
| 12 | 1.048700 | 0.903729 | 0.643760 | 00:24 |
| 13 | 1.050432 | 0.915146 | 0.642180 | 00:24 |
| 14 | 1.044051 | 0.910577 | 0.643760 | 00:24 |
| 15 | 1.038224 | 0.912615 | 0.642970 | 00:24 |
| 16 | 1.040770 | 0.905791 | 0.650079 | 00:24 |
| 17 | 1.041600 | 0.907160 | 0.649289 | 00:24 |
| 18 | 1.038198 | 0.909241 | 0.646130 | 00:24 |
| 19 | 1.034247 | 0.908286 | 0.646919 | 00:24 |
Better model found at epoch 0 with accuracy value: 0.639020562171936.
Better model found at epoch 7 with accuracy value: 0.6398104429244995.
Better model found at epoch 8 with accuracy value: 0.640600323677063.
Better model found at epoch 11 with accuracy value: 0.6453396677970886.
Better model found at epoch 16 with accuracy value: 0.6500790119171143.
Done.
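The ensembling code itself isn't shown here; assuming the fold models are combined by averaging their predicted class probabilities (a common k-fold ensembling approach), a minimal sketch with hypothetical values looks like this:

```python
import numpy as np

# Hypothetical per-fold predicted class probabilities for 3 images, 4 classes.
fold_preds = [
    np.array([[0.70, 0.10, 0.10, 0.10],
              [0.20, 0.50, 0.20, 0.10],
              [0.25, 0.25, 0.25, 0.25]]),
    np.array([[0.60, 0.20, 0.10, 0.10],
              [0.10, 0.60, 0.20, 0.10],
              [0.40, 0.20, 0.20, 0.20]]),
]

# Simple ensemble: average the probabilities across folds, then take the argmax.
ensemble = np.mean(fold_preds, axis=0)
labels = ensemble.argmax(axis=1)
print(labels)  # -> [0 1 0]
```

Averaging probabilities (rather than majority-voting hard labels) lets confident folds outweigh uncertain ones.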
Adding these models to the ensemble gives a score of 0.350 on the public LB. Now let's move on to the next part of the project - the image level.
The second task of the competition is to detect opacities in the lungs. There are many network architectures for object detection; here we will use different flavors of Yolov5.
Yolov5 is a very popular object detection architecture. Although not needed here, it's very fast and suitable for real-time systems. The official Yolov5 implementation is very friendly - convenient to train and to use - and its GitHub repo has detailed and comprehensive tutorials. Yolov5 also has built-in integration with wandb, and creates helpful graphs and visualizations during the training process.
First, we have to clone the Yolov5 repo and install its dependencies.
%cd /content/gdrive/MyDrive/covid19-detection/
!git clone https://github.com/ultralytics/yolov5 # clone repo
%cd yolov5
%pip install -qr requirements.txt # install dependencies
%pip install albumentations --upgrade > /dev/null
/content/gdrive/MyDrive/covid19-detection
fatal: destination path 'yolov5' already exists and is not an empty directory.
/content/gdrive/MyDrive/covid19-detection/yolov5
|████████████████████████████████| 636 kB 5.0 MB/s
To be able to track the training with wandb, we have to install it too.
!pip install wandb 1> /dev/null
Next, we have to convert the dataset to Yolov5 format.
First, we will copy the images to a new location, under the path DS-ROOT-PATH/images/train.
Then, in the same root, we will create a labels directory under the path DS-ROOT-PATH/labels/train. In this directory we will create a label file for each image, containing a list of labels (class and bounding box for each object in the image) for the corresponding image.
Yolo's bounding box format is different from the format used in the dataset. In the dataset, the x and y of a box refer to its top-left corner; in Yolo format, they refer to the box center, and all coordinates are normalized by the image size. So we have to convert the bounding boxes to Yolo format before writing them to the label files.
!mkdir /content/jpeg-256/images
!cp -r /content/jpeg-256/train /content/jpeg-256/images
from ast import literal_eval
from pathlib import Path
labels_dir = Path('/content/jpeg-256/labels/train/')
labels_dir.mkdir(exist_ok=True, parents=True)
def box2yolo(shape, box):
    # Convert a box from dataset format (top-left corner + size, in pixels)
    # to Yolo format (normalized center coordinates + size).
    W, H = shape
    x = box['x']
    y = box['y']
    w = box['width']
    h = box['height']
    # Normalize by the image dimensions.
    x /= W
    w /= W
    y /= H
    h /= H
    # Shift (x, y) from the top-left corner to the box center.
    x = x + w/2
    y = y + h/2
    return x, y, w, h
for row in train_df.itertuples():
    if isinstance(row.boxes, float):
        # Missing boxes are NaN (a float) - write an empty label file.
        yolo_boxes = ''
    else:
        boxes = literal_eval(row.boxes)
        H = row.Rows
        W = row.Columns
        yolo_boxes = [(0,) + box2yolo((W, H), box) for box in boxes]
        yolo_boxes = [' '.join(str(e) for e in o) for o in yolo_boxes]
        yolo_boxes = '\n'.join(yolo_boxes)
    with open(labels_dir/(row.image_id + '.txt'), 'w+') as fp:
        fp.write(yolo_boxes)
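As a quick sanity check of the conversion, here is the same arithmetic as box2yolo applied to a hypothetical 128x64 box with its top-left corner at (64, 32) in a 256x256 image:

```python
# Hypothetical example: reproduce the box2yolo arithmetic by hand.
W, H = 256, 256
box = {'x': 64, 'y': 32, 'width': 128, 'height': 64}

x, y = box['x'] / W, box['y'] / H            # normalized top-left corner
w, h = box['width'] / W, box['height'] / H   # normalized width and height
x, y = x + w / 2, y + h / 2                  # shift to the box center

print(x, y, w, h)  # -> 0.5 0.25 0.5 0.25
```

The box center (160, 64) in pixels indeed maps to (0.5, 0.25) after normalization.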
Now we have to create train.txt and val.txt, which contain the lists of images in each split.
working_path = Path('/content/yolo-ds')
working_path.mkdir(exist_ok=True)
train_txt = '\n'.join(f'/content/jpeg-256/images/train/{f}' for f in train_df[~train_df.valid].image_fn.values)
val_txt = '\n'.join(f'/content/jpeg-256/images/train/{f}' for f in train_df[train_df.valid].image_fn.values)
with open(working_path/'train.txt', 'w+') as fp:
    fp.write(train_txt)
with open(working_path/'val.txt', 'w+') as fp:
    fp.write(val_txt)
!head $working_path/train.txt
/content/jpeg-256/images/train/000a312787f2.jpg
/content/jpeg-256/images/train/0012ff7358bc.jpg
/content/jpeg-256/images/train/001398f4ff4f.jpg
/content/jpeg-256/images/train/001bd15d1891.jpg
/content/jpeg-256/images/train/0022227f5adf.jpg
/content/jpeg-256/images/train/0023f02ae886.jpg
/content/jpeg-256/images/train/002e9b2128d0.jpg
/content/jpeg-256/images/train/0044e449aae1.jpg
/content/jpeg-256/images/train/0049814626c8.jpg
/content/jpeg-256/images/train/004cbd797cd1.jpg
Now we have to create a YAML file describing the metadata of the dataset. All we need here is the class list (here we have only one class - opacity) and the paths to the train.txt and val.txt files we just created.
import yaml
config = dict(
    train=str(working_path/'train.txt'),
    val=str(working_path/'val.txt'),
    nc=1,
    names=['0. opacity']
)
with open(working_path/'config.yaml', 'w+') as fp:
    yaml.dump(config, fp)
!cat $working_path/'config.yaml'
names:
- 0. opacity
nc: 1
train: /content/yolo-ds/train.txt
val: /content/yolo-ds/val.txt
!python train.py --img 256 \
--batch 256 \
--epochs 500 \
--data $working_path/config.yaml \
--weights yolov5s.pt \
--hyp data/hyps/hyp.finetune.yaml \
--name project-yolov5s-500-epochos
train: weights=yolov5s.pt, cfg=, data=/content/yolo-ds/config.yaml, hyp=data/hyps/hyp.finetune.yaml, epochs=500, batch_size=256, imgsz=256, rect=False, resume=False, nosave=False, noval=False, noautoanchor=False, evolve=None, bucket=, cache=None, image_weights=False, device=, multi_scale=False, single_cls=False, adam=False, sync_bn=False, workers=8, project=runs/train, entity=None, name=project-yolov5s-500-epochos, exist_ok=False, quad=False, linear_lr=False, label_smoothing=0.0, upload_dataset=False, bbox_interval=-1, save_period=-1, artifact_alias=latest, local_rank=-1, freeze=0, patience=30 github: Command 'git fetch && git config --get remote.origin.url' timed out after 5 seconds remote: Enumerating objects: 234, done. remote: Counting objects: 100% (131/131), done. remote: Compressing objects: 100% (27/27), done. YOLOv5 🚀 2021-8-28 torch 1.9.0+cu102 CUDA:0 (Tesla P100-PCIE-16GB, 16280.875MB) remote: Total 234 (delta 110), reused 120 (delta 104), pack-reused 103 Receiving objects: 100% (234/234), 136.07 KiB | 1.02 MiB/s, done. hyperparameters: lr0=0.0032, lrf=0.12, momentum=0.843, weight_decay=0.00036, warmup_epochs=2.0, warmup_momentum=0.5, warmup_bias_lr=0.05, box=0.0296, cls=0.243, cls_pw=0.631, obj=0.301, obj_pw=0.911, iou_t=0.2, anchor_t=2.91, fl_gamma=0.0, hsv_h=0.0138, hsv_s=0.664, hsv_v=0.464, degrees=0.373, translate=0.245, scale=0.898, shear=0.602, perspective=0.0, flipud=0.00856, fliplr=0.5, mosaic=1.0, mixup=0.243, copy_paste=0.0 TensorBoard: Start with 'tensorboard --logdir runs/train', view at http://localhost:6006/ Resolving deltas: 100% (147/147), completed with 30 local objects. 
From https://github.com/ultralytics/yolov5 d7aa3f1..ba0f808 master -> origin/master * [new branch] fix/arial2 -> origin/fix/arial2 0006d34..753138c new_data_set_loaders -> origin/new_data_set_loaders 13ffd35..5dbd2eb update/tf_export -> origin/update/tf_export wandb: (1) Create a W&B account wandb: (2) Use an existing W&B account wandb: (3) Don't visualize my results wandb: Enter your choice: 2 wandb: You chose 'Use an existing W&B account' wandb: You can find your API key in your browser here: https://wandb.ai/authorize wandb: Paste an API key from your profile and hit enter: wandb: Appending key for api.wandb.ai to your netrc file: /root/.netrc wandb: Tracking run with wandb version 0.12.1 wandb: Syncing run project-yolov5s-500-epochos wandb: ⭐️ View project at https://wandb.ai/itamardvir/YOLOv5 wandb: 🚀 View run at https://wandb.ai/itamardvir/YOLOv5/runs/2awj08jw wandb: Run data is saved locally in /content/gdrive/My Drive/covid19-detection/yolov5/wandb/run-20210831_171924-2awj08jw wandb: Run `wandb offline` to turn off syncing. 
Overriding model.yaml nc=80 with nc=1 from n params module arguments 0 -1 1 3520 models.common.Focus [3, 32, 3] 1 -1 1 18560 models.common.Conv [32, 64, 3, 2] 2 -1 1 18816 models.common.C3 [64, 64, 1] 3 -1 1 73984 models.common.Conv [64, 128, 3, 2] 4 -1 3 156928 models.common.C3 [128, 128, 3] 5 -1 1 295424 models.common.Conv [128, 256, 3, 2] 6 -1 3 625152 models.common.C3 [256, 256, 3] 7 -1 1 1180672 models.common.Conv [256, 512, 3, 2] 8 -1 1 656896 models.common.SPP [512, 512, [5, 9, 13]] 9 -1 1 1182720 models.common.C3 [512, 512, 1, False] 10 -1 1 131584 models.common.Conv [512, 256, 1, 1] 11 -1 1 0 torch.nn.modules.upsampling.Upsample [None, 2, 'nearest'] 12 [-1, 6] 1 0 models.common.Concat [1] 13 -1 1 361984 models.common.C3 [512, 256, 1, False] 14 -1 1 33024 models.common.Conv [256, 128, 1, 1] 15 -1 1 0 torch.nn.modules.upsampling.Upsample [None, 2, 'nearest'] 16 [-1, 4] 1 0 models.common.Concat [1] 17 -1 1 90880 models.common.C3 [256, 128, 1, False] 18 -1 1 147712 models.common.Conv [128, 128, 3, 2] 19 [-1, 14] 1 0 models.common.Concat [1] 20 -1 1 296448 models.common.C3 [256, 256, 1, False] 21 -1 1 590336 models.common.Conv [256, 256, 3, 2] 22 [-1, 10] 1 0 models.common.Concat [1] 23 -1 1 1182720 models.common.C3 [512, 512, 1, False] 24 [17, 20, 23] 1 16182 models.yolo.Detect [1, [[10, 13, 16, 30, 33, 23], [30, 61, 62, 45, 59, 119], [116, 90, 156, 198, 373, 326]], [128, 256, 512]] Model Summary: 283 layers, 7063542 parameters, 7063542 gradients, 16.4 GFLOPs Transferred 356/362 items from yolov5s.pt Scaled weight_decay = 0.00144 optimizer: SGD with parameter groups 59 weight, 62 weight (no decay), 62 bias albumentations: Blur(always_apply=False, p=0.1, blur_limit=(3, 7)), MedianBlur(always_apply=False, p=0.1, blur_limit=(3, 7)), ToGray(always_apply=False, p=0.01) train: Scanning '/content/yolo-ds/train' images and labels...4973 found, 0 missing, 1565 empty, 0 corrupted: 100% 4973/4973 [00:01<00:00, 4246.29it/s] train: New cache created: 
/content/yolo-ds/train.cache val: Scanning '/content/yolo-ds/val' images and labels...1361 found, 0 missing, 475 empty, 0 corrupted: 100% 1361/1361 [00:00<00:00, 1974.90it/s] val: New cache created: /content/yolo-ds/val.cache [W pthreadpool-cpp.cc:90] Warning: Leaking Caffe2 thread-pool after fork. (function pthreadpool) [W pthreadpool-cpp.cc:90] Warning: Leaking Caffe2 thread-pool after fork. (function pthreadpool) [W pthreadpool-cpp.cc:90] Warning: Leaking Caffe2 thread-pool after fork. (function pthreadpool) [W pthreadpool-cpp.cc:90] Warning: Leaking Caffe2 thread-pool after fork. (function pthreadpool) Plotting labels... autoanchor: Analyzing anchors... anchors/target = 4.36, Best Possible Recall (BPR) = 0.9998 Image sizes 256 train, 256 val Using 4 dataloader workers Logging results to runs/train/project-yolov5s-500-epochos Starting training for 500 epochs... Epoch gpu_mem box obj cls labels img_size 0/499 7.93G 0.06809 0.003324 0 382 256: 100% 20/20 [00:17<00:00, 1.17it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 3/3 [00:09<00:00, 3.10s/it] all 1361 1617 0.0102 0.0421 0.0033 0.000523 Epoch gpu_mem box obj cls labels img_size 1/499 9.29G 0.06382 0.003752 0 304 256: 100% 20/20 [00:15<00:00, 1.31it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 3/3 [00:08<00:00, 2.85s/it] all 1361 1617 0.00978 0.0581 0.00374 0.000614 Epoch gpu_mem box obj cls labels img_size 2/499 9.29G 0.06106 0.004072 0 307 256: 100% 20/20 [00:15<00:00, 1.32it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 3/3 [00:08<00:00, 2.96s/it] all 1361 1617 0.013 0.0507 0.0045 0.000737 Epoch gpu_mem box obj cls labels img_size 3/499 9.29G 0.05881 0.004336 0 314 256: 100% 20/20 [00:15<00:00, 1.29it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 3/3 [00:08<00:00, 2.91s/it] all 1361 1617 0.0126 0.0853 0.00488 0.000831 Epoch gpu_mem box obj cls labels img_size 4/499 9.29G 0.0566 0.004516 0 335 256: 100% 20/20 [00:15<00:00, 1.31it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 3/3 
[00:08<00:00, 2.90s/it] all 1361 1617 0.0185 0.0643 0.00657 0.00115 Epoch gpu_mem box obj cls labels img_size 5/499 9.29G 0.05479 0.004778 0 317 256: 100% 20/20 [00:15<00:00, 1.33it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 3/3 [00:08<00:00, 2.98s/it] all 1361 1617 0.0196 0.068 0.00719 0.00125 Epoch gpu_mem box obj cls labels img_size 6/499 9.29G 0.05258 0.004952 0 299 256: 100% 20/20 [00:14<00:00, 1.34it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 3/3 [00:09<00:00, 3.12s/it] all 1361 1617 0.0248 0.0878 0.00904 0.00154 Epoch gpu_mem box obj cls labels img_size 7/499 9.29G 0.05075 0.005162 0 306 256: 100% 20/20 [00:15<00:00, 1.33it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 3/3 [00:09<00:00, 3.03s/it] all 1361 1617 0.0285 0.0804 0.0106 0.00186 Epoch gpu_mem box obj cls labels img_size 8/499 9.29G 0.04893 0.005189 0 272 256: 100% 20/20 [00:15<00:00, 1.32it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 3/3 [00:08<00:00, 2.95s/it] all 1361 1617 0.0333 0.148 0.0137 0.00248 Epoch gpu_mem box obj cls labels img_size 9/499 9.29G 0.04735 0.005394 0 312 256: 100% 20/20 [00:15<00:00, 1.29it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 3/3 [00:08<00:00, 2.90s/it] all 1361 1617 0.0408 0.198 0.0191 0.00363 Epoch gpu_mem box obj cls labels img_size 10/499 9.29G 0.04566 0.005486 0 286 256: 100% 20/20 [00:15<00:00, 1.32it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 3/3 [00:09<00:00, 3.01s/it] all 1361 1617 0.075 0.205 0.0358 0.00713 Epoch gpu_mem box obj cls labels img_size 11/499 9.29G 0.04428 0.005537 0 350 256: 100% 20/20 [00:15<00:00, 1.29it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 3/3 [00:08<00:00, 2.86s/it] all 1361 1617 0.119 0.254 0.0648 0.0136 Epoch gpu_mem box obj cls labels img_size 12/499 9.29G 0.04303 0.005532 0 284 256: 100% 20/20 [00:15<00:00, 1.30it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 3/3 [00:09<00:00, 3.02s/it] all 1361 1617 0.134 0.307 0.0824 0.0181 Epoch gpu_mem box obj cls labels img_size 13/499 
9.29G 0.04203 0.005501 0 335 256: 100% 20/20 [00:15<00:00, 1.31it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 3/3 [00:08<00:00, 2.96s/it] all 1361 1617 0.175 0.329 0.112 0.0239 Epoch gpu_mem box obj cls labels img_size 14/499 9.29G 0.04142 0.005352 0 281 256: 100% 20/20 [00:15<00:00, 1.27it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 3/3 [00:08<00:00, 2.87s/it] all 1361 1617 0.143 0.362 0.0975 0.0208 Epoch gpu_mem box obj cls labels img_size 15/499 9.29G 0.04055 0.005289 0 324 256: 100% 20/20 [00:15<00:00, 1.27it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 3/3 [00:08<00:00, 2.88s/it] all 1361 1617 0.148 0.395 0.097 0.0199 Epoch gpu_mem box obj cls labels img_size 16/499 9.29G 0.04025 0.005214 0 331 256: 100% 20/20 [00:14<00:00, 1.35it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 3/3 [00:09<00:00, 3.01s/it] all 1361 1617 0.139 0.395 0.0938 0.0185 Epoch gpu_mem box obj cls labels img_size 17/499 9.29G 0.0397 0.005156 0 363 256: 100% 20/20 [00:15<00:00, 1.28it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 3/3 [00:08<00:00, 2.77s/it] all 1361 1617 0.135 0.449 0.102 0.0209 Epoch gpu_mem box obj cls labels img_size 18/499 9.29G 0.03997 0.005092 0 330 256: 100% 20/20 [00:14<00:00, 1.33it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 3/3 [00:09<00:00, 3.03s/it] all 1361 1617 0.142 0.431 0.103 0.0218 Epoch gpu_mem box obj cls labels img_size 19/499 9.29G 0.03904 0.005016 0 347 256: 100% 20/20 [00:15<00:00, 1.33it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 3/3 [00:09<00:00, 3.00s/it] all 1361 1617 0.163 0.474 0.13 0.0267 Epoch gpu_mem box obj cls labels img_size 20/499 9.29G 0.03887 0.004947 0 318 256: 100% 20/20 [00:15<00:00, 1.31it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 3/3 [00:08<00:00, 2.98s/it] all 1361 1617 0.187 0.439 0.141 0.0297 Epoch gpu_mem box obj cls labels img_size 21/499 9.29G 0.03883 0.004845 0 283 256: 100% 20/20 [00:15<00:00, 1.30it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 3/3 [00:08<00:00, 
2.94s/it] all 1361 1617 0.209 0.425 0.156 0.0354 Epoch gpu_mem box obj cls labels img_size 22/499 9.29G 0.03848 0.004792 0 293 256: 100% 20/20 [00:15<00:00, 1.28it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 3/3 [00:08<00:00, 2.90s/it] all 1361 1617 0.218 0.429 0.171 0.0379 Epoch gpu_mem box obj cls labels img_size 23/499 9.29G 0.0383 0.00486 0 329 256: 100% 20/20 [00:15<00:00, 1.30it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 3/3 [00:08<00:00, 2.90s/it] all 1361 1617 0.223 0.365 0.158 0.0314 Epoch gpu_mem box obj cls labels img_size 24/499 9.29G 0.03798 0.004806 0 322 256: 100% 20/20 [00:14<00:00, 1.34it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 3/3 [00:08<00:00, 2.91s/it] all 1361 1617 0.279 0.347 0.193 0.0437 Epoch gpu_mem box obj cls labels img_size 25/499 9.29G 0.03792 0.004752 0 269 256: 100% 20/20 [00:15<00:00, 1.29it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 3/3 [00:08<00:00, 2.95s/it] all 1361 1617 0.306 0.39 0.228 0.053 Epoch gpu_mem box obj cls labels img_size 26/499 9.29G 0.03785 0.004794 0 337 256: 100% 20/20 [00:15<00:00, 1.30it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 3/3 [00:08<00:00, 2.95s/it] all 1361 1617 0.322 0.397 0.255 0.064 Epoch gpu_mem box obj cls labels img_size 27/499 9.29G 0.03767 0.004711 0 312 256: 100% 20/20 [00:15<00:00, 1.30it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 3/3 [00:08<00:00, 2.94s/it] all 1361 1617 0.295 0.425 0.249 0.0607 Epoch gpu_mem box obj cls labels img_size 28/499 9.29G 0.03752 0.004721 0 350 256: 100% 20/20 [00:15<00:00, 1.33it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 3/3 [00:09<00:00, 3.02s/it] all 1361 1617 0.306 0.393 0.25 0.064 Epoch gpu_mem box obj cls labels img_size 29/499 9.29G 0.03716 0.004625 0 324 256: 100% 20/20 [00:15<00:00, 1.29it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 3/3 [00:08<00:00, 2.78s/it] all 1361 1617 0.31 0.42 0.262 0.0656 Epoch gpu_mem box obj cls labels img_size 30/499 9.29G 0.03717 0.004573 0 294 256: 100% 20/20 
[00:16<00:00, 1.25it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 3/3 [00:08<00:00, 2.69s/it] all 1361 1617 0.369 0.367 0.27 0.0671 Epoch gpu_mem box obj cls labels img_size 31/499 9.29G 0.03669 0.004611 0 267 256: 100% 20/20 [00:15<00:00, 1.29it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 3/3 [00:08<00:00, 2.79s/it] all 1361 1617 0.331 0.411 0.278 0.0678 Epoch gpu_mem box obj cls labels img_size 32/499 9.29G 0.03702 0.004573 0 287 256: 100% 20/20 [00:15<00:00, 1.30it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 3/3 [00:08<00:00, 2.88s/it] all 1361 1617 0.361 0.411 0.293 0.0719 Epoch gpu_mem box obj cls labels img_size 33/499 9.29G 0.03673 0.004624 0 318 256: 100% 20/20 [00:15<00:00, 1.28it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 3/3 [00:08<00:00, 2.77s/it] all 1361 1617 0.378 0.392 0.296 0.0765 Epoch gpu_mem box obj cls labels img_size 34/499 9.29G 0.03658 0.004577 0 323 256: 100% 20/20 [00:15<00:00, 1.31it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 3/3 [00:08<00:00, 2.89s/it] all 1361 1617 0.366 0.408 0.296 0.0748 Epoch gpu_mem box obj cls labels img_size 35/499 9.29G 0.03628 0.00445 0 326 256: 100% 20/20 [00:15<00:00, 1.27it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 3/3 [00:08<00:00, 2.69s/it] all 1361 1617 0.415 0.413 0.33 0.0848 Epoch gpu_mem box obj cls labels img_size 36/499 9.29G 0.03634 0.004483 0 297 256: 100% 20/20 [00:15<00:00, 1.28it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 3/3 [00:08<00:00, 2.81s/it] all 1361 1617 0.364 0.476 0.329 0.0896 Epoch gpu_mem box obj cls labels img_size 37/499 9.29G 0.03643 0.004508 0 341 256: 100% 20/20 [00:15<00:00, 1.25it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 3/3 [00:08<00:00, 2.70s/it] all 1361 1617 0.437 0.402 0.326 0.0918 Epoch gpu_mem box obj cls labels img_size 38/499 9.29G 0.03602 0.004411 0 286 256: 100% 20/20 [00:15<00:00, 1.29it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 3/3 [00:08<00:00, 2.87s/it] all 1361 1617 0.441 0.448 0.363 0.0983 
Epoch gpu_mem box obj cls labels img_size 39/499 9.29G 0.03601 0.004452 0 319 256: 100% 20/20 [00:15<00:00, 1.31it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 3/3 [00:08<00:00, 2.89s/it] all 1361 1617 0.432 0.429 0.351 0.0968 Epoch gpu_mem box obj cls labels img_size 40/499 9.29G 0.03583 0.004407 0 307 256: 100% 20/20 [00:15<00:00, 1.31it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 3/3 [00:08<00:00, 2.74s/it] all 1361 1617 0.42 0.46 0.369 0.102 Epoch gpu_mem box obj cls labels img_size 41/499 9.29G 0.03587 0.004487 0 351 256: 100% 20/20 [00:15<00:00, 1.32it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 3/3 [00:08<00:00, 2.83s/it] all 1361 1617 0.444 0.458 0.381 0.101 Epoch gpu_mem box obj cls labels img_size 42/499 9.29G 0.03553 0.004327 0 280 256: 100% 20/20 [00:15<00:00, 1.27it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 3/3 [00:08<00:00, 2.80s/it] all 1361 1617 0.467 0.445 0.381 0.103 Epoch gpu_mem box obj cls labels img_size 43/499 9.29G 0.03565 0.004352 0 313 256: 100% 20/20 [00:15<00:00, 1.30it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 3/3 [00:08<00:00, 2.86s/it] all 1361 1617 0.388 0.401 0.298 0.0796 Epoch gpu_mem box obj cls labels img_size 44/499 9.29G 0.03542 0.004363 0 293 256: 100% 20/20 [00:15<00:00, 1.31it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 3/3 [00:08<00:00, 2.91s/it] all 1361 1617 0.461 0.459 0.368 0.107 Epoch gpu_mem box obj cls labels img_size 45/499 9.29G 0.03483 0.004365 0 358 256: 100% 20/20 [00:15<00:00, 1.28it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 3/3 [00:08<00:00, 2.79s/it] all 1361 1617 0.495 0.43 0.382 0.106 Epoch gpu_mem box obj cls labels img_size 46/499 9.29G 0.03534 0.004281 0 297 256: 100% 20/20 [00:15<00:00, 1.27it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 3/3 [00:08<00:00, 2.68s/it] all 1361 1617 0.433 0.447 0.366 0.0987 Epoch gpu_mem box obj cls labels img_size 47/499 9.29G 0.03502 0.004393 0 324 256: 100% 20/20 [00:15<00:00, 1.30it/s] Class Images Labels P R 
mAP@.5 mAP@.5:.95: 100% 3/3 [00:08<00:00, 2.79s/it] all 1361 1617 0.497 0.444 0.398 0.111 Epoch gpu_mem box obj cls labels img_size 48/499 9.29G 0.03492 0.004192 0 307 256: 100% 20/20 [00:15<00:00, 1.29it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 3/3 [00:08<00:00, 2.69s/it] all 1361 1617 0.449 0.401 0.335 0.0906 Epoch gpu_mem box obj cls labels img_size 49/499 9.29G 0.03494 0.004249 0 344 256: 100% 20/20 [00:15<00:00, 1.26it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 3/3 [00:08<00:00, 2.82s/it] all 1361 1617 0.495 0.456 0.397 0.113 Epoch gpu_mem box obj cls labels img_size 50/499 9.29G 0.03457 0.00434 0 344 256: 100% 20/20 [00:15<00:00, 1.27it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 3/3 [00:08<00:00, 2.73s/it] all 1361 1617 0.458 0.456 0.377 0.0996 Epoch gpu_mem box obj cls labels img_size 51/499 9.29G 0.03439 0.004299 0 370 256: 100% 20/20 [00:15<00:00, 1.29it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 3/3 [00:08<00:00, 2.78s/it] all 1361 1617 0.431 0.42 0.329 0.0877 Epoch gpu_mem box obj cls labels img_size 52/499 9.29G 0.03444 0.004227 0 342 256: 100% 20/20 [00:15<00:00, 1.32it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 3/3 [00:08<00:00, 2.95s/it] all 1361 1617 0.445 0.497 0.392 0.113 Epoch gpu_mem box obj cls labels img_size 53/499 9.29G 0.03431 0.004315 0 344 256: 100% 20/20 [00:15<00:00, 1.28it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 3/3 [00:08<00:00, 2.91s/it] all 1361 1617 0.505 0.447 0.394 0.118 Epoch gpu_mem box obj cls labels img_size 54/499 9.29G 0.03432 0.004299 0 290 256: 100% 20/20 [00:15<00:00, 1.25it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 3/3 [00:08<00:00, 2.85s/it] all 1361 1617 0.493 0.439 0.37 0.104 Epoch gpu_mem box obj cls labels img_size 55/499 9.29G 0.03417 0.004147 0 274 256: 100% 20/20 [00:16<00:00, 1.22it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 3/3 [00:07<00:00, 2.62s/it] all 1361 1617 0.547 0.433 0.411 0.121 Epoch gpu_mem box obj cls labels img_size 56/499 
9.29G 0.03417 0.004241 0 321 256
Class Images Labels P R mAP@.5 mAP@.5:.95
all 1361 1617 0.474 0.473 0.398 0.112
[... per-epoch output for epochs 57-123 omitted; validation mAP@.5 plateaued around 0.40-0.45 ...]
Epoch gpu_mem box obj cls labels img_size
124/499 9.29G 0.03243 0.004007 0 284 256
Class Images Labels P R mAP@.5 mAP@.5:.95
all 1361 1617 0.518 0.476 0.447 0.132
EarlyStopping patience 30 exceeded, stopping training. 125 epochs completed in 0.861 hours.
Optimizer stripped from runs/train/project-yolov5s-500-epochos/weights/last.pt, 14.3MB
Optimizer stripped from runs/train/project-yolov5s-500-epochos/weights/best.pt, 14.3MB
wandb: Run summary:
wandb: train/box_loss 0.03243
wandb: train/obj_loss 0.00401
wandb: train/cls_loss 0.0
wandb: metrics/precision 0.51813
wandb: metrics/recall 0.47557
wandb: metrics/mAP_0.5 0.44735
wandb: metrics/mAP_0.5:0.95 0.13246
wandb: val/box_loss 0.03249
wandb: val/obj_loss 0.00185
wandb: Synced project-yolov5s-500-epochos: https://wandb.ai/itamardvir/YOLOv5/runs/2awj08jw
Results saved to runs/train/project-yolov5s-500-epochos
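A note on the metric reported in these logs: mAP@.5 counts a predicted box as a true positive when its intersection-over-union (IoU) with a ground-truth box is at least 0.5, while mAP@.5:.95 averages over IoU thresholds from 0.5 to 0.95. The IoU itself is simple to compute; here is a minimal sketch for axis-aligned boxes:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    # Corners of the intersection rectangle.
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    # Clamp at zero: non-overlapping boxes have no intersection area.
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)
```

For example, two identical boxes give an IoU of 1.0, and disjoint boxes give 0.0.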
Training was stopped by the early-stopping mechanism after 125 epochs, with a best mAP@.5 score of about 0.45.
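The early stopping seen in the log is driven by YOLOv5's patience parameter (30 in this run): training ends once the validation fitness has not improved for that many epochs. A simplified sketch of the idea (not the library's actual implementation):

```python
class EarlyStopping:
    """Stop training when the metric has not improved for `patience` epochs.

    Simplified illustration of patience-based early stopping; YOLOv5 uses
    patience=30 in the run above.
    """
    def __init__(self, patience=30):
        self.patience = patience
        self.best_fitness = 0.0
        self.best_epoch = 0

    def step(self, epoch, fitness):
        # Remember the epoch with the best validation fitness (e.g. mAP).
        if fitness >= self.best_fitness:
            self.best_fitness = fitness
            self.best_epoch = epoch
        # Stop once `patience` epochs have passed without improvement.
        return (epoch - self.best_epoch) >= self.patience
```

With patience=3, for instance, a run whose metric peaks at epoch 2 and then declines would be stopped at epoch 5.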
YOLOv5 comes in several model architectures. Generally, they all give roughly the same mAP score, but for building an ensemble we want the models to differ from each other as much as possible, so we will now train a different YOLO configuration for each of our 5 folds.
First, we have to create the train and validation image-list files (train_fold_k.txt and val_fold_k.txt) for the five folds:
working_path = Path('/content/yolo-ds')
working_path.mkdir(exist_ok=True)
for fold in range(nfolds):
    train_txt = '\n'.join(f'/content/jpeg-256/images/train/{f}' for f in train_df[train_df.val_fold != fold].image_fn.values)
    val_txt = '\n'.join(f'/content/jpeg-256/images/train/{f}' for f in train_df[train_df.val_fold == fold].image_fn.values)
    with open(working_path/f'train_fold_{fold}.txt', 'w+') as fp:
        fp.write(train_txt)
    with open(working_path/f'val_fold_{fold}.txt', 'w+') as fp:
        fp.write(val_txt)
import yaml

for fold in range(nfolds):
    config = dict(
        train=str(working_path/f'train_fold_{fold}.txt'),
        val=str(working_path/f'val_fold_{fold}.txt'),
        nc=1,
        names=['opacity']
    )
    with open(working_path/f'config_fold_{fold}.yaml', 'w+') as fp:
        yaml.dump(config, fp)
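For reference, the generated config_fold_0.yaml should look roughly like this (yaml.dump sorts keys alphabetically by default; the exact layout may differ slightly):

```yaml
names:
- opacity
nc: 1
train: /content/yolo-ds/train_fold_0.txt
val: /content/yolo-ds/val_fold_0.txt
```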
Now let's set the configuration of the 5-fold training. Based on former experiments, we will train the yolov5s architecture for 400 and 300 epochs (folds 0 and 1) with a batch size of 256, yolov5m for 200 epochs with a batch size of 128, and yolov5l and yolov5x for 120 epochs with a batch size of 64.
run_config = {
    0: {'model': 'yolov5s',
        'batch-size': 256,
        'epochs': 400},
    1: {'model': 'yolov5s',
        'batch-size': 256,
        'epochs': 300},
    2: {'model': 'yolov5m',
        'batch-size': 128,
        'epochs': 200},
    3: {'model': 'yolov5l',
        'batch-size': 64,
        'epochs': 120},
    4: {'model': 'yolov5x',
        'batch-size': 64,
        'epochs': 120},
}
def train_yolo_fold(fold):
    model, batch_size, epochs = run_config[fold]['model'], run_config[fold]['batch-size'], run_config[fold]['epochs']
    !python train.py \
        --img 256 \
        --batch-size $batch_size \
        --epochs $epochs \
        --data $working_path/config_fold_{fold}.yaml \
        --weights {model}.pt \
        --hyp data/hyps/hyp.finetune.yaml \
        --name project-$model-$batch_size-$epochs-fold-$fold

for fold in range(nfolds):
    train_yolo_fold(fold)
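As a side note, the `!` shell escape above only works inside a notebook. In a plain Python script, the same launch could be sketched with subprocess; `build_train_cmd` below is a hypothetical helper mirroring the call above, not part of YOLOv5:

```python
import subprocess

def build_train_cmd(model, batch_size, epochs, fold, working_path='/content/yolo-ds'):
    """Build the argv list equivalent to the notebook's `!python train.py ...` call."""
    return [
        'python', 'train.py',
        '--img', '256',
        '--batch-size', str(batch_size),
        '--epochs', str(epochs),
        '--data', f'{working_path}/config_fold_{fold}.yaml',
        '--weights', f'{model}.pt',
        '--hyp', 'data/hyps/hyp.finetune.yaml',
        '--name', f'project-{model}-{batch_size}-{epochs}-fold-{fold}',
    ]

# Example (run from the yolov5 directory):
# subprocess.run(build_train_cmd('yolov5s', 256, 400, fold=0), check=True)
```

Passing a list (rather than a shell string) avoids quoting issues and lets subprocess invoke train.py directly.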
train: weights=yolov5s.pt, cfg=, data=/content/yolo-ds/config_fold_0.yaml, hyp=data/hyps/hyp.finetune.yaml, epochs=400, batch_size=256, imgsz=256, patience=30, name=project-yolov5s-256-400-fold-0
YOLOv5 🚀 2021-8-28 torch 1.9.0+cu102 CUDA:0 (Tesla P100-PCIE-16GB, 16280.875MB)
wandb: Syncing run project-yolov5s-256-400-fold-0
wandb: 🚀 View run at https://wandb.ai/itamardvir/YOLOv5/runs/5882t4to
Overriding model.yaml nc=80 with nc=1
Model Summary: 283 layers, 7063542 parameters, 7063542 gradients, 16.4 GFLOPs
train: Scanning '/content/yolo-ds/train_fold_0' images and labels...5067 found, 0 missing, 1623 empty, 0 corrupted
val: Scanning '/content/yolo-ds/val_fold_0' images and labels...1267 found, 0 missing, 417 empty, 0 corrupted
Image sizes 256 train, 256 val
Logging results to runs/train/project-yolov5s-256-400-fold-0
Starting training for 400 epochs...
Epoch gpu_mem box obj cls labels img_size
0/399 7.93G 0.06803 0.003237 0 622 256
Class Images Labels P R mAP@.5 mAP@.5:.95
all 1267 1559 0.0117 0.0417 0.00391 0.0006
[... per-epoch output for epochs 1-58 omitted; validation mAP@.5 climbed from ~0.005 to ~0.45 ...]
59/399 9.29G 0.03404 0.004147 0 612 256
Class Images Labels P R mAP@.5 mAP@.5:.95
all 1267 1559 0.549 0.466 0.453 0.128 Epoch
gpu_mem box obj cls labels img_size 60/399 9.29G 0.03403 0.004164 0 620 256: 100% 20/20 [00:12<00:00, 1.63it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 3/3 [00:05<00:00, 1.82s/it] all 1267 1559 0.592 0.451 0.468 0.132 Epoch gpu_mem box obj cls labels img_size 61/399 9.29G 0.03402 0.004182 0 630 256: 100% 20/20 [00:12<00:00, 1.57it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 3/3 [00:04<00:00, 1.66s/it] all 1267 1559 0.52 0.493 0.448 0.129 Epoch gpu_mem box obj cls labels img_size 62/399 9.29G 0.03386 0.004064 0 618 256: 100% 20/20 [00:12<00:00, 1.58it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 3/3 [00:05<00:00, 1.80s/it] all 1267 1559 0.558 0.471 0.467 0.134 Epoch gpu_mem box obj cls labels img_size 63/399 9.29G 0.03377 0.004212 0 553 256: 100% 20/20 [00:12<00:00, 1.59it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 3/3 [00:05<00:00, 1.83s/it] all 1267 1559 0.528 0.485 0.458 0.139 Epoch gpu_mem box obj cls labels img_size 64/399 9.29G 0.0336 0.004092 0 578 256: 100% 20/20 [00:12<00:00, 1.59it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 3/3 [00:05<00:00, 1.79s/it] all 1267 1559 0.502 0.484 0.435 0.132 Epoch gpu_mem box obj cls labels img_size 65/399 9.29G 0.03384 0.004187 0 559 256: 100% 20/20 [00:12<00:00, 1.59it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 3/3 [00:05<00:00, 1.70s/it] all 1267 1559 0.471 0.377 0.338 0.0912 Epoch gpu_mem box obj cls labels img_size 66/399 9.29G 0.03364 0.004224 0 666 256: 100% 20/20 [00:12<00:00, 1.57it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 3/3 [00:05<00:00, 1.82s/it] all 1267 1559 0.492 0.447 0.404 0.115 Epoch gpu_mem box obj cls labels img_size 67/399 9.29G 0.03344 0.004074 0 528 256: 100% 20/20 [00:12<00:00, 1.62it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 3/3 [00:05<00:00, 1.74s/it] all 1267 1559 0.555 0.479 0.463 0.14 Epoch gpu_mem box obj cls labels img_size 68/399 9.29G 0.03379 0.004087 0 612 256: 100% 20/20 [00:12<00:00, 1.56it/s] Class Images Labels P R mAP@.5 
mAP@.5:.95: 100% 3/3 [00:05<00:00, 1.81s/it] all 1267 1559 0.553 0.436 0.436 0.123 Epoch gpu_mem box obj cls labels img_size 69/399 9.29G 0.03352 0.004131 0 585 256: 100% 20/20 [00:12<00:00, 1.59it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 3/3 [00:05<00:00, 1.76s/it] all 1267 1559 0.465 0.446 0.385 0.111 Epoch gpu_mem box obj cls labels img_size 70/399 9.29G 0.03326 0.004115 0 561 256: 100% 20/20 [00:12<00:00, 1.59it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 3/3 [00:05<00:00, 1.80s/it] all 1267 1559 0.555 0.493 0.464 0.139 Epoch gpu_mem box obj cls labels img_size 71/399 9.29G 0.03337 0.004152 0 538 256: 100% 20/20 [00:12<00:00, 1.62it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 3/3 [00:05<00:00, 1.88s/it] all 1267 1559 0.526 0.482 0.46 0.142 Epoch gpu_mem box obj cls labels img_size 72/399 9.29G 0.03342 0.004107 0 482 256: 100% 20/20 [00:12<00:00, 1.65it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 3/3 [00:05<00:00, 1.86s/it] all 1267 1559 0.545 0.479 0.469 0.141 Epoch gpu_mem box obj cls labels img_size 73/399 9.29G 0.03363 0.004094 0 566 256: 100% 20/20 [00:13<00:00, 1.54it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 3/3 [00:05<00:00, 1.72s/it] all 1267 1559 0.503 0.504 0.459 0.136 Epoch gpu_mem box obj cls labels img_size 74/399 9.29G 0.03345 0.004146 0 587 256: 100% 20/20 [00:13<00:00, 1.52it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 3/3 [00:05<00:00, 1.67s/it] all 1267 1559 0.549 0.487 0.471 0.142 Epoch gpu_mem box obj cls labels img_size 75/399 9.29G 0.03327 0.004097 0 533 256: 100% 20/20 [00:12<00:00, 1.57it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 3/3 [00:05<00:00, 1.82s/it] all 1267 1559 0.531 0.488 0.46 0.137 Epoch gpu_mem box obj cls labels img_size 76/399 9.29G 0.0333 0.004112 0 575 256: 100% 20/20 [00:12<00:00, 1.55it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 3/3 [00:05<00:00, 1.69s/it] all 1267 1559 0.557 0.473 0.472 0.139 Epoch gpu_mem box obj cls labels img_size 77/399 9.29G 0.03335 
0.004065 0 594 256: 100% 20/20 [00:12<00:00, 1.61it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 3/3 [00:05<00:00, 1.89s/it] all 1267 1559 0.599 0.452 0.464 0.137 Epoch gpu_mem box obj cls labels img_size 78/399 9.29G 0.03312 0.004134 0 522 256: 100% 20/20 [00:12<00:00, 1.61it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 3/3 [00:05<00:00, 1.83s/it] all 1267 1559 0.56 0.464 0.458 0.138 Epoch gpu_mem box obj cls labels img_size 79/399 9.29G 0.03326 0.004076 0 556 256: 100% 20/20 [00:12<00:00, 1.63it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 3/3 [00:05<00:00, 1.98s/it] all 1267 1559 0.566 0.47 0.46 0.141 Epoch gpu_mem box obj cls labels img_size 80/399 9.29G 0.0333 0.004034 0 539 256: 100% 20/20 [00:12<00:00, 1.64it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 3/3 [00:05<00:00, 1.91s/it] all 1267 1559 0.527 0.442 0.437 0.127 Epoch gpu_mem box obj cls labels img_size 81/399 9.29G 0.03317 0.004099 0 610 256: 100% 20/20 [00:12<00:00, 1.56it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 3/3 [00:05<00:00, 1.69s/it] all 1267 1559 0.497 0.516 0.458 0.14 Epoch gpu_mem box obj cls labels img_size 82/399 9.29G 0.03307 0.004065 0 568 256: 100% 20/20 [00:12<00:00, 1.57it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 3/3 [00:05<00:00, 1.75s/it] all 1267 1559 0.531 0.491 0.472 0.138 Epoch gpu_mem box obj cls labels img_size 83/399 9.29G 0.03316 0.004112 0 564 256: 100% 20/20 [00:12<00:00, 1.58it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 3/3 [00:05<00:00, 1.79s/it] all 1267 1559 0.517 0.501 0.464 0.146 Epoch gpu_mem box obj cls labels img_size 84/399 9.29G 0.03321 0.004053 0 533 256: 100% 20/20 [00:12<00:00, 1.58it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 3/3 [00:05<00:00, 1.75s/it] all 1267 1559 0.527 0.487 0.46 0.137 Epoch gpu_mem box obj cls labels img_size 85/399 9.29G 0.03299 0.00411 0 534 256: 100% 20/20 [00:12<00:00, 1.63it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 3/3 [00:05<00:00, 1.81s/it] all 1267 1559 0.54 
0.502 0.469 0.144 Epoch gpu_mem box obj cls labels img_size 86/399 9.29G 0.0331 0.004088 0 531 256: 100% 20/20 [00:12<00:00, 1.61it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 3/3 [00:05<00:00, 1.87s/it] all 1267 1559 0.542 0.511 0.483 0.142 Epoch gpu_mem box obj cls labels img_size 87/399 9.29G 0.03295 0.004099 0 573 256: 100% 20/20 [00:12<00:00, 1.58it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 3/3 [00:05<00:00, 1.72s/it] all 1267 1559 0.491 0.54 0.469 0.139 Epoch gpu_mem box obj cls labels img_size 88/399 9.29G 0.03309 0.004078 0 526 256: 100% 20/20 [00:12<00:00, 1.61it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 3/3 [00:05<00:00, 1.94s/it] all 1267 1559 0.595 0.448 0.477 0.137 Epoch gpu_mem box obj cls labels img_size 89/399 9.29G 0.03324 0.004106 0 616 256: 100% 20/20 [00:12<00:00, 1.58it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 3/3 [00:05<00:00, 1.77s/it] all 1267 1559 0.559 0.51 0.497 0.148 Epoch gpu_mem box obj cls labels img_size 90/399 9.29G 0.03288 0.00408 0 523 256: 100% 20/20 [00:13<00:00, 1.52it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 3/3 [00:05<00:00, 1.67s/it] all 1267 1559 0.56 0.493 0.468 0.141 Epoch gpu_mem box obj cls labels img_size 91/399 9.29G 0.03296 0.004037 0 587 256: 100% 20/20 [00:12<00:00, 1.59it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 3/3 [00:05<00:00, 1.76s/it] all 1267 1559 0.53 0.503 0.466 0.14 Epoch gpu_mem box obj cls labels img_size 92/399 9.29G 0.03293 0.004095 0 561 256: 100% 20/20 [00:12<00:00, 1.54it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 3/3 [00:05<00:00, 1.78s/it] all 1267 1559 0.552 0.498 0.479 0.142 Epoch gpu_mem box obj cls labels img_size 93/399 9.29G 0.03305 0.004094 0 611 256: 100% 20/20 [00:12<00:00, 1.61it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 3/3 [00:05<00:00, 1.82s/it] all 1267 1559 0.519 0.498 0.465 0.143 Epoch gpu_mem box obj cls labels img_size 94/399 9.29G 0.03274 0.004101 0 599 256: 100% 20/20 [00:12<00:00, 1.62it/s] Class Images 
Labels P R mAP@.5 mAP@.5:.95: 100% 3/3 [00:05<00:00, 1.80s/it] all 1267 1559 0.509 0.495 0.455 0.138 Epoch gpu_mem box obj cls labels img_size 95/399 9.29G 0.03307 0.004022 0 551 256: 100% 20/20 [00:12<00:00, 1.61it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 3/3 [00:05<00:00, 1.77s/it] all 1267 1559 0.528 0.518 0.48 0.149 Epoch gpu_mem box obj cls labels img_size 96/399 9.29G 0.03258 0.004079 0 523 256: 100% 20/20 [00:12<00:00, 1.65it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 3/3 [00:05<00:00, 1.85s/it] all 1267 1559 0.492 0.509 0.439 0.134 Epoch gpu_mem box obj cls labels img_size 97/399 9.29G 0.03272 0.004095 0 558 256: 100% 20/20 [00:12<00:00, 1.63it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 3/3 [00:05<00:00, 1.85s/it] all 1267 1559 0.586 0.484 0.493 0.152 Epoch gpu_mem box obj cls labels img_size 98/399 9.29G 0.033 0.004051 0 577 256: 100% 20/20 [00:12<00:00, 1.60it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 3/3 [00:05<00:00, 1.78s/it] all 1267 1559 0.46 0.461 0.393 0.107 Epoch gpu_mem box obj cls labels img_size 99/399 9.29G 0.03257 0.004072 0 542 256: 100% 20/20 [00:12<00:00, 1.63it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 3/3 [00:05<00:00, 1.83s/it] all 1267 1559 0.545 0.423 0.423 0.125 Epoch gpu_mem box obj cls labels img_size 100/399 9.29G 0.03274 0.003978 0 642 256: 100% 20/20 [00:12<00:00, 1.61it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 3/3 [00:05<00:00, 1.69s/it] all 1267 1559 0.541 0.471 0.453 0.137 Epoch gpu_mem box obj cls labels img_size 101/399 9.29G 0.03273 0.004012 0 598 256: 100% 20/20 [00:12<00:00, 1.64it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 3/3 [00:05<00:00, 1.91s/it] all 1267 1559 0.53 0.53 0.491 0.151 Epoch gpu_mem box obj cls labels img_size 102/399 9.29G 0.03267 0.004146 0 568 256: 100% 20/20 [00:12<00:00, 1.62it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 3/3 [00:05<00:00, 1.90s/it] all 1267 1559 0.572 0.506 0.496 0.149 Epoch gpu_mem box obj cls labels img_size 
103/399 9.29G 0.03248 0.004075 0 557 256: 100% 20/20 [00:12<00:00, 1.61it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 3/3 [00:05<00:00, 1.79s/it] all 1267 1559 0.539 0.493 0.478 0.146 Epoch gpu_mem box obj cls labels img_size 104/399 9.29G 0.03274 0.003968 0 527 256: 100% 20/20 [00:12<00:00, 1.60it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 3/3 [00:05<00:00, 1.76s/it] all 1267 1559 0.524 0.521 0.485 0.149 Epoch gpu_mem box obj cls labels img_size 105/399 9.29G 0.03257 0.004024 0 593 256: 100% 20/20 [00:12<00:00, 1.58it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 3/3 [00:05<00:00, 1.76s/it] all 1267 1559 0.556 0.504 0.49 0.146 Epoch gpu_mem box obj cls labels img_size 106/399 9.29G 0.03247 0.004056 0 582 256: 100% 20/20 [00:12<00:00, 1.60it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 3/3 [00:05<00:00, 1.87s/it] all 1267 1559 0.515 0.532 0.474 0.147 Epoch gpu_mem box obj cls labels img_size 107/399 9.29G 0.03272 0.004111 0 633 256: 100% 20/20 [00:12<00:00, 1.60it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 3/3 [00:05<00:00, 1.83s/it] all 1267 1559 0.528 0.493 0.471 0.148 Epoch gpu_mem box obj cls labels img_size 108/399 9.29G 0.03276 0.004026 0 552 256: 100% 20/20 [00:12<00:00, 1.54it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 3/3 [00:05<00:00, 1.71s/it] all 1267 1559 0.504 0.436 0.425 0.119 Epoch gpu_mem box obj cls labels img_size 109/399 9.29G 0.0325 0.004057 0 533 256: 100% 20/20 [00:12<00:00, 1.60it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 3/3 [00:05<00:00, 1.71s/it] all 1267 1559 0.531 0.513 0.49 0.145 Epoch gpu_mem box obj cls labels img_size 110/399 9.29G 0.03245 0.004053 0 556 256: 100% 20/20 [00:12<00:00, 1.59it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 3/3 [00:05<00:00, 1.84s/it] all 1267 1559 0.517 0.5 0.459 0.143 Epoch gpu_mem box obj cls labels img_size 111/399 9.29G 0.03251 0.00407 0 632 256: 100% 20/20 [00:12<00:00, 1.59it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 3/3 
[00:05<00:00, 1.81s/it] all 1267 1559 0.545 0.49 0.464 0.144 Epoch gpu_mem box obj cls labels img_size 112/399 9.29G 0.03258 0.004 0 600 256: 100% 20/20 [00:12<00:00, 1.61it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 3/3 [00:05<00:00, 1.85s/it] all 1267 1559 0.519 0.516 0.474 0.15 Epoch gpu_mem box obj cls labels img_size 113/399 9.29G 0.03256 0.004068 0 634 256: 100% 20/20 [00:12<00:00, 1.64it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 3/3 [00:05<00:00, 1.87s/it] all 1267 1559 0.605 0.477 0.501 0.154 Epoch gpu_mem box obj cls labels img_size 114/399 9.29G 0.03259 0.004104 0 591 256: 100% 20/20 [00:13<00:00, 1.53it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 3/3 [00:05<00:00, 1.76s/it] all 1267 1559 0.589 0.466 0.476 0.145 Epoch gpu_mem box obj cls labels img_size 115/399 9.29G 0.03268 0.004037 0 590 256: 100% 20/20 [00:12<00:00, 1.62it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 3/3 [00:05<00:00, 1.91s/it] all 1267 1559 0.506 0.541 0.485 0.153 Epoch gpu_mem box obj cls labels img_size 116/399 9.29G 0.03243 0.004022 0 572 256: 100% 20/20 [00:12<00:00, 1.57it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 3/3 [00:05<00:00, 1.78s/it] all 1267 1559 0.553 0.482 0.469 0.148 Epoch gpu_mem box obj cls labels img_size 117/399 9.29G 0.03234 0.00398 0 567 256: 100% 20/20 [00:12<00:00, 1.61it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 3/3 [00:05<00:00, 1.86s/it] all 1267 1559 0.535 0.495 0.465 0.141 Epoch gpu_mem box obj cls labels img_size 118/399 9.29G 0.03245 0.004024 0 567 256: 100% 20/20 [00:12<00:00, 1.60it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 3/3 [00:05<00:00, 1.75s/it] all 1267 1559 0.511 0.497 0.466 0.142 Epoch gpu_mem box obj cls labels img_size 119/399 9.29G 0.03236 0.004078 0 583 256: 100% 20/20 [00:12<00:00, 1.61it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 3/3 [00:05<00:00, 1.97s/it] all 1267 1559 0.55 0.47 0.465 0.147 Epoch gpu_mem box obj cls labels img_size 120/399 9.29G 0.0322 0.004075 0 555 
256: 100% 20/20 [00:12<00:00, 1.59it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 3/3 [00:05<00:00, 1.81s/it] all 1267 1559 0.564 0.507 0.497 0.154 Epoch gpu_mem box obj cls labels img_size 121/399 9.29G 0.03252 0.004059 0 529 256: 100% 20/20 [00:12<00:00, 1.56it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 3/3 [00:05<00:00, 1.73s/it] all 1267 1559 0.528 0.517 0.496 0.151 Epoch gpu_mem box obj cls labels img_size 122/399 9.29G 0.03219 0.004001 0 570 256: 100% 20/20 [00:12<00:00, 1.61it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 3/3 [00:05<00:00, 1.86s/it] all 1267 1559 0.559 0.47 0.476 0.145 Epoch gpu_mem box obj cls labels img_size 123/399 9.29G 0.0323 0.004021 0 596 256: 100% 20/20 [00:12<00:00, 1.61it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 3/3 [00:05<00:00, 1.87s/it] all 1267 1559 0.557 0.518 0.507 0.151 Epoch gpu_mem box obj cls labels img_size 124/399 9.29G 0.03245 0.004065 0 586 256: 100% 20/20 [00:12<00:00, 1.60it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 3/3 [00:05<00:00, 1.78s/it] all 1267 1559 0.58 0.483 0.493 0.156 Epoch gpu_mem box obj cls labels img_size 125/399 9.29G 0.03236 0.004042 0 579 256: 100% 20/20 [00:12<00:00, 1.59it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 3/3 [00:05<00:00, 1.87s/it] all 1267 1559 0.507 0.526 0.483 0.143 Epoch gpu_mem box obj cls labels img_size 126/399 9.29G 0.03264 0.004092 0 629 256: 100% 20/20 [00:12<00:00, 1.57it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 3/3 [00:05<00:00, 1.75s/it] all 1267 1559 0.563 0.47 0.479 0.138 Epoch gpu_mem box obj cls labels img_size 127/399 9.29G 0.03224 0.004066 0 528 256: 100% 20/20 [00:12<00:00, 1.63it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 3/3 [00:05<00:00, 1.80s/it] all 1267 1559 0.526 0.505 0.465 0.144 Epoch gpu_mem box obj cls labels img_size 128/399 9.29G 0.03241 0.004024 0 607 256: 100% 20/20 [00:12<00:00, 1.61it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 3/3 [00:05<00:00, 1.84s/it] all 1267 1559 0.527 
0.513 0.486 0.152 Epoch gpu_mem box obj cls labels img_size 129/399 9.29G 0.03235 0.004042 0 618 256: 100% 20/20 [00:12<00:00, 1.60it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 3/3 [00:05<00:00, 1.81s/it] all 1267 1559 0.519 0.501 0.48 0.149 Epoch gpu_mem box obj cls labels img_size 130/399 9.29G 0.03216 0.004041 0 580 256: 100% 20/20 [00:12<00:00, 1.62it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 3/3 [00:05<00:00, 1.89s/it] all 1267 1559 0.568 0.49 0.494 0.155 Epoch gpu_mem box obj cls labels img_size 131/399 9.29G 0.03229 0.003978 0 513 256: 100% 20/20 [00:12<00:00, 1.64it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 3/3 [00:05<00:00, 1.88s/it] all 1267 1559 0.567 0.461 0.466 0.146 Epoch gpu_mem box obj cls labels img_size 132/399 9.29G 0.03235 0.004049 0 559 256: 100% 20/20 [00:12<00:00, 1.59it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 3/3 [00:05<00:00, 1.87s/it] all 1267 1559 0.573 0.476 0.495 0.154 Epoch gpu_mem box obj cls labels img_size 133/399 9.29G 0.03227 0.004076 0 604 256: 100% 20/20 [00:12<00:00, 1.62it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 3/3 [00:05<00:00, 1.77s/it] all 1267 1559 0.544 0.528 0.503 0.15 Epoch gpu_mem box obj cls labels img_size 134/399 9.29G 0.03227 0.004021 0 512 256: 100% 20/20 [00:12<00:00, 1.64it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 3/3 [00:05<00:00, 1.82s/it] all 1267 1559 0.527 0.482 0.465 0.135 Epoch gpu_mem box obj cls labels img_size 135/399 9.29G 0.03242 0.004068 0 667 256: 100% 20/20 [00:12<00:00, 1.61it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 3/3 [00:05<00:00, 1.78s/it] all 1267 1559 0.613 0.423 0.474 0.142 Epoch gpu_mem box obj cls labels img_size 136/399 9.29G 0.0324 0.004056 0 601 256: 100% 20/20 [00:12<00:00, 1.63it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 3/3 [00:05<00:00, 1.89s/it] all 1267 1559 0.499 0.535 0.475 0.149 Epoch gpu_mem box obj cls labels img_size 137/399 9.29G 0.03211 0.003953 0 598 256: 100% 20/20 [00:12<00:00, 1.58it/s] 
Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 3/3 [00:05<00:00, 1.67s/it] all 1267 1559 0.544 0.504 0.489 0.154 Epoch gpu_mem box obj cls labels img_size 138/399 9.29G 0.03248 0.004063 0 597 256: 100% 20/20 [00:12<00:00, 1.60it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 3/3 [00:05<00:00, 1.70s/it] all 1267 1559 0.558 0.492 0.491 0.15 Epoch gpu_mem box obj cls labels img_size 139/399 9.29G 0.03222 0.003957 0 619 256: 100% 20/20 [00:12<00:00, 1.59it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 3/3 [00:05<00:00, 1.74s/it] all 1267 1559 0.531 0.523 0.495 0.147 Epoch gpu_mem box obj cls labels img_size 140/399 9.29G 0.03225 0.004064 0 567 256: 100% 20/20 [00:12<00:00, 1.62it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 3/3 [00:05<00:00, 1.69s/it] all 1267 1559 0.559 0.493 0.486 0.15 Epoch gpu_mem box obj cls labels img_size 141/399 9.29G 0.03213 0.003985 0 629 256: 100% 20/20 [00:12<00:00, 1.63it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 3/3 [00:05<00:00, 1.78s/it] all 1267 1559 0.587 0.48 0.5 0.152 Epoch gpu_mem box obj cls labels img_size 142/399 9.29G 0.03202 0.004109 0 626 256: 100% 20/20 [00:12<00:00, 1.60it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 3/3 [00:05<00:00, 1.75s/it] all 1267 1559 0.555 0.502 0.489 0.138 Epoch gpu_mem box obj cls labels img_size 143/399 9.29G 0.03243 0.004081 0 600 256: 100% 20/20 [00:12<00:00, 1.64it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 3/3 [00:05<00:00, 1.82s/it] all 1267 1559 0.538 0.491 0.457 0.144 Epoch gpu_mem box obj cls labels img_size 144/399 9.29G 0.03206 0.00405 0 591 256: 100% 20/20 [00:12<00:00, 1.62it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 3/3 [00:05<00:00, 1.84s/it] all 1267 1559 0.586 0.471 0.485 0.153 Epoch gpu_mem box obj cls labels img_size 145/399 9.29G 0.03195 0.003926 0 567 256: 100% 20/20 [00:12<00:00, 1.60it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 3/3 [00:05<00:00, 1.79s/it] all 1267 1559 0.584 0.477 0.496 0.155 Epoch gpu_mem box obj cls 
labels img_size 146/399 9.29G 0.03198 0.004047 0 583 256: 100% 20/20 [00:12<00:00, 1.64it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 3/3 [00:05<00:00, 1.82s/it] all 1267 1559 0.56 0.497 0.497 0.158 Epoch gpu_mem box obj cls labels img_size 147/399 9.29G 0.03212 0.004004 0 599 256: 100% 20/20 [00:12<00:00, 1.58it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 3/3 [00:05<00:00, 1.71s/it] all 1267 1559 0.594 0.466 0.486 0.149 Epoch gpu_mem box obj cls labels img_size 148/399 9.29G 0.03205 0.003986 0 499 256: 100% 20/20 [00:12<00:00, 1.55it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 3/3 [00:05<00:00, 1.73s/it] all 1267 1559 0.562 0.498 0.498 0.156 Epoch gpu_mem box obj cls labels img_size 149/399 9.29G 0.03202 0.004069 0 585 256: 100% 20/20 [00:12<00:00, 1.59it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 3/3 [00:05<00:00, 1.73s/it] all 1267 1559 0.525 0.478 0.455 0.144 Epoch gpu_mem box obj cls labels img_size 150/399 9.29G 0.03202 0.004072 0 589 256: 100% 20/20 [00:12<00:00, 1.61it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 3/3 [00:05<00:00, 1.82s/it] all 1267 1559 0.478 0.568 0.491 0.158 Epoch gpu_mem box obj cls labels img_size 151/399 9.29G 0.03219 0.003967 0 551 256: 100% 20/20 [00:12<00:00, 1.67it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 3/3 [00:05<00:00, 1.91s/it] all 1267 1559 0.505 0.542 0.494 0.158 Epoch gpu_mem box obj cls labels img_size 152/399 9.29G 0.0319 0.00405 0 601 256: 100% 20/20 [00:12<00:00, 1.62it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 3/3 [00:05<00:00, 1.74s/it] all 1267 1559 0.556 0.481 0.482 0.151 Epoch gpu_mem box obj cls labels img_size 153/399 9.29G 0.03199 0.003927 0 574 256: 100% 20/20 [00:12<00:00, 1.60it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 3/3 [00:05<00:00, 1.81s/it] all 1267 1559 0.533 0.525 0.491 0.152 Epoch gpu_mem box obj cls labels img_size 154/399 9.29G 0.03202 0.004068 0 527 256: 100% 20/20 [00:12<00:00, 1.60it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 
100% 3/3 [00:05<00:00, 1.87s/it] all 1267 1559 0.549 0.507 0.496 0.153 Epoch gpu_mem box obj cls labels img_size 155/399 9.29G 0.03205 0.00405 0 590 256: 100% 20/20 [00:12<00:00, 1.61it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 3/3 [00:05<00:00, 1.87s/it] all 1267 1559 0.549 0.484 0.478 0.153 Epoch gpu_mem box obj cls labels img_size 156/399 9.29G 0.03211 0.00407 0 607 256: 100% 20/20 [00:12<00:00, 1.62it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 3/3 [00:05<00:00, 1.90s/it] all 1267 1559 0.531 0.487 0.472 0.149 Epoch gpu_mem box obj cls labels img_size 157/399 9.29G 0.03201 0.003989 0 580 256: 100% 20/20 [00:12<00:00, 1.65it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 3/3 [00:05<00:00, 1.88s/it] all 1267 1559 0.577 0.489 0.501 0.153 Epoch gpu_mem box obj cls labels img_size 158/399 9.29G 0.03205 0.004029 0 686 256: 100% 20/20 [00:12<00:00, 1.57it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 3/3 [00:05<00:00, 1.75s/it] all 1267 1559 0.532 0.493 0.485 0.15 Epoch gpu_mem box obj cls labels img_size 159/399 9.29G 0.03187 0.003991 0 550 256: 100% 20/20 [00:12<00:00, 1.62it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 3/3 [00:05<00:00, 1.95s/it] all 1267 1559 0.533 0.502 0.47 0.15 Epoch gpu_mem box obj cls labels img_size 160/399 9.29G 0.0321 0.004043 0 571 256: 100% 20/20 [00:12<00:00, 1.61it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 3/3 [00:05<00:00, 1.93s/it] all 1267 1559 0.523 0.522 0.484 0.149 Epoch gpu_mem box obj cls labels img_size 161/399 9.29G 0.03195 0.003916 0 586 256: 100% 20/20 [00:12<00:00, 1.63it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 3/3 [00:05<00:00, 1.86s/it] all 1267 1559 0.523 0.516 0.488 0.151 Epoch gpu_mem box obj cls labels img_size 162/399 9.29G 0.0318 0.003999 0 590 256: 100% 20/20 [00:12<00:00, 1.65it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 3/3 [00:05<00:00, 1.77s/it] all 1267 1559 0.553 0.491 0.479 0.146 Epoch gpu_mem box obj cls labels img_size 163/399 9.29G 0.03171 
0.004033 0 564 256: 100% 20/20 [00:12<00:00, 1.62it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 3/3 [00:05<00:00, 1.83s/it] all 1267 1559 0.532 0.499 0.473 0.15 Epoch gpu_mem box obj cls labels img_size 164/399 9.29G 0.0318 0.00397 0 553 256: 100% 20/20 [00:12<00:00, 1.62it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 3/3 [00:05<00:00, 1.78s/it] all 1267 1559 0.523 0.483 0.455 0.145 Epoch gpu_mem box obj cls labels img_size 165/399 9.29G 0.03195 0.004003 0 592 256: 100% 20/20 [00:12<00:00, 1.60it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 3/3 [00:05<00:00, 1.83s/it] all 1267 1559 0.527 0.484 0.466 0.146 Epoch gpu_mem box obj cls labels img_size 166/399 9.29G 0.03195 0.004024 0 606 256: 100% 20/20 [00:12<00:00, 1.61it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 3/3 [00:05<00:00, 1.73s/it] all 1267 1559 0.561 0.482 0.489 0.151 Epoch gpu_mem box obj cls labels img_size 167/399 9.29G 0.03194 0.004054 0 657 256: 100% 20/20 [00:12<00:00, 1.60it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 3/3 [00:05<00:00, 1.80s/it] all 1267 1559 0.558 0.474 0.465 0.147 Epoch gpu_mem box obj cls labels img_size 168/399 9.29G 0.03193 0.003952 0 616 256: 100% 20/20 [00:11<00:00, 1.67it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 3/3 [00:05<00:00, 1.82s/it] all 1267 1559 0.556 0.463 0.473 0.144 Epoch gpu_mem box obj cls labels img_size 169/399 9.29G 0.0319 0.003957 0 572 256: 100% 20/20 [00:12<00:00, 1.59it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 3/3 [00:04<00:00, 1.66s/it] all 1267 1559 0.56 0.488 0.487 0.151 Epoch gpu_mem box obj cls labels img_size 170/399 9.29G 0.03187 0.004009 0 606 256: 100% 20/20 [00:12<00:00, 1.57it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 3/3 [00:05<00:00, 1.73s/it] all 1267 1559 0.509 0.48 0.447 0.136 Epoch gpu_mem box obj cls labels img_size 171/399 9.29G 0.03207 0.004054 0 571 256: 100% 20/20 [00:12<00:00, 1.57it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 3/3 [00:05<00:00, 1.78s/it] all 1267 
1559 0.517 0.52 0.471 0.149 Epoch gpu_mem box obj cls labels img_size 172/399 9.29G 0.03179 0.003971 0 570 256: 100% 20/20 [00:12<00:00, 1.60it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 3/3 [00:05<00:00, 1.77s/it] all 1267 1559 0.57 0.491 0.484 0.152 Epoch gpu_mem box obj cls labels img_size 173/399 9.29G 0.03171 0.00391 0 554 256: 100% 20/20 [00:12<00:00, 1.61it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 3/3 [00:05<00:00, 1.79s/it] all 1267 1559 0.562 0.511 0.498 0.155 Epoch gpu_mem box obj cls labels img_size 174/399 9.29G 0.03165 0.004 0 545 256: 100% 20/20 [00:12<00:00, 1.62it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 3/3 [00:05<00:00, 1.79s/it] all 1267 1559 0.538 0.49 0.468 0.144 Epoch gpu_mem box obj cls labels img_size 175/399 9.29G 0.0317 0.003997 0 600 256: 100% 20/20 [00:12<00:00, 1.58it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 3/3 [00:05<00:00, 1.77s/it] all 1267 1559 0.552 0.491 0.481 0.152 Epoch gpu_mem box obj cls labels img_size 176/399 9.29G 0.03179 0.003962 0 572 256: 100% 20/20 [00:12<00:00, 1.63it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 3/3 [00:05<00:00, 1.82s/it] all 1267 1559 0.536 0.509 0.49 0.154 EarlyStopping patience 30 exceeded, stopping training. 177 epochs completed in 0.917 hours. Optimizer stripped from runs/train/project-yolov5s-256-400-fold-0/weights/last.pt, 14.3MB Optimizer stripped from runs/train/project-yolov5s-256-400-fold-0/weights/best.pt, 14.3MB wandb: Waiting for W&B process to finish, PID 589 wandb: Program ended successfully. 
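The run above was halted by YOLOv5's patience-based early stopping: training ends once the validation fitness has not improved for `patience` consecutive epochs (30 here, which is why epoch 176 is 30 epochs past the ~146 peak). Below is my own minimal sketch of that mechanism, not YOLOv5's actual `EarlyStopping` class — among other simplifications, YOLOv5's fitness is a weighted combination of mAP@.5 and mAP@.5:.95 rather than a single metric:

```python
class PatienceStopper:
    """Simplified patience-based early stopping.

    step() returns True once `patience` epochs have passed since the
    best validation fitness was seen, signalling the loop to break.
    """

    def __init__(self, patience: int = 30):
        self.patience = patience
        self.best_fitness = float("-inf")
        self.best_epoch = 0

    def step(self, epoch: int, fitness: float) -> bool:
        if fitness > self.best_fitness:  # new best: reset the counter
            self.best_fitness = fitness
            self.best_epoch = epoch
        return (epoch - self.best_epoch) >= self.patience


# Usage: inside the training loop, after each validation pass
# stopper = PatienceStopper(patience=30)
# for epoch in range(epochs):
#     ...train one epoch, compute val fitness...
#     if stopper.step(epoch, fitness):
#         break
```

With patience 30, the 400-epoch budget acts only as an upper bound; the run stops itself once the validation curve flattens, which is what happened here at epoch 176.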
wandb: wandb: Find user logs for this run at: /content/gdrive/My Drive/covid19-detection/yolov5/wandb/run-20210828_231857-5882t4to/logs/debug.log wandb: Find internal logs for this run at: /content/gdrive/My Drive/covid19-detection/yolov5/wandb/run-20210828_231857-5882t4to/logs/debug-internal.log wandb: Run summary: wandb: train/box_loss 0.03179 wandb: train/obj_loss 0.00396 wandb: train/cls_loss 0.0 wandb: metrics/precision 0.53619 wandb: metrics/recall 0.50866 wandb: metrics/mAP_0.5 0.49022 wandb: metrics/mAP_0.5:0.95 0.15387 wandb: val/box_loss 0.03237 wandb: val/obj_loss 0.00183 wandb: val/cls_loss 0.0 wandb: x/lr0 0.00207 wandb: x/lr1 0.00207 wandb: x/lr2 0.00207 wandb: _runtime 3325 wandb: _timestamp 1630196062 wandb: _step 177 wandb: Run history: wandb: train/box_loss █▆▄▃▃▂▂▂▂▂▂▂▂▂▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁ wandb: train/obj_loss ▁▅██▆▅▅▄▄▃▃▃▃▃▃▂▃▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂ wandb: train/cls_loss ▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁ wandb: metrics/precision ▁▁▂▃▃▄▅▅▆▆▇▆▇▇▇█▇▇▇▇█▇▇▇▇▇▇▇▇▇▇▇▇█▇█▇▇█▇ wandb: metrics/recall ▁▁▃▅▇▆▆▆▆▇▆▇▇▇▇▇▇▇█▇█▇▆▇▇▇▇██▇██▇▇█▇█▇▇▇ wandb: metrics/mAP_0.5 ▁▁▁▂▃▄▅▅▆▆▆▇▇▇▇▇▇▇▇██▇▇██▇▇█████▇████▇██ wandb: metrics/mAP_0.5:0.95 ▁▁▁▂▂▃▄▄▅▆▆▆▇▇▇▇▇▇▇▇█▇▇▇█▇▇█▇███▇████▇██ wandb: val/box_loss █▆▄▃▂▂▂▂▂▂▂▂▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁ wandb: val/obj_loss ▅▇██▆▅▄▄▃▃▂▂▂▂▁▁▁▂▁▁▁▁▂▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁ wandb: val/cls_loss ▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁ wandb: x/lr0 ▁▂▂▃▃▄▅▅▆▇▇█████████▇▇▇▇▇▇▇▇▇▇▇▆▆▆▆▆▆▆▆▆ wandb: x/lr1 ▁▂▂▃▃▄▅▅▆▇▇█████████▇▇▇▇▇▇▇▇▇▇▇▆▆▆▆▆▆▆▆▆ wandb: x/lr2 █▇▇▆▆▅▄▄▃▂▂▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁ wandb: _runtime ▁▁▁▁▂▂▂▂▂▃▃▃▃▃▄▄▄▄▄▄▅▅▅▅▅▆▆▆▆▆▆▇▇▇▇▇▇███ wandb: _timestamp ▁▁▁▁▂▂▂▂▂▃▃▃▃▃▄▄▄▄▄▄▅▅▅▅▅▆▆▆▆▆▆▇▇▇▇▇▇███ wandb: _step ▁▁▁▁▂▂▂▂▂▃▃▃▃▃▃▄▄▄▄▄▅▅▅▅▅▅▆▆▆▆▆▆▇▇▇▇▇███ wandb: wandb: Synced 5 W&B file(s), 134 media file(s), 1 artifact file(s) and 0 other file(s) wandb: wandb: Synced project-yolov5s-256-400-fold-0: https://wandb.ai/itamardvir/YOLOv5/runs/5882t4to Results saved to runs/train/project-yolov5s-256-400-fold-0 train: 
weights=yolov5s.pt, cfg=, data=/content/yolo-ds/config_fold_1.yaml, hyp=data/hyps/hyp.finetune.yaml, epochs=300, batch_size=256, imgsz=256, rect=False, resume=False, nosave=False, noval=False, noautoanchor=False, evolve=None, bucket=, cache=None, image_weights=False, device=, multi_scale=False, single_cls=False, adam=False, sync_bn=False, workers=8, project=runs/train, entity=None, name=project-yolov5s-256-300-fold-1, exist_ok=False, quad=False, linear_lr=False, label_smoothing=0.0, upload_dataset=False, bbox_interval=-1, save_period=-1, artifact_alias=latest, local_rank=-1, freeze=0, patience=30
github: up to date with https://github.com/ultralytics/yolov5 ✅
YOLOv5 🚀 2021-8-28 torch 1.9.0+cu102 CUDA:0 (Tesla P100-PCIE-16GB, 16280.875MB)

hyperparameters: lr0=0.0032, lrf=0.12, momentum=0.843, weight_decay=0.00036, warmup_epochs=2.0, warmup_momentum=0.5, warmup_bias_lr=0.05, box=0.0296, cls=0.243, cls_pw=0.631, obj=0.301, obj_pw=0.911, iou_t=0.2, anchor_t=2.91, fl_gamma=0.0, hsv_h=0.0138, hsv_s=0.664, hsv_v=0.464, degrees=0.373, translate=0.245, scale=0.898, shear=0.602, perspective=0.0, flipud=0.00856, fliplr=0.5, mosaic=1.0, mixup=0.243, copy_paste=0.0

TensorBoard: Start with 'tensorboard --logdir runs/train', view at http://localhost:6006/
wandb: Currently logged in as: itamardvir
wandb: Tracking run with wandb version 0.12.1
wandb: Syncing run project-yolov5s-256-300-fold-1
wandb: ⭐️ View project at https://wandb.ai/itamardvir/YOLOv5
wandb: 🚀 View run at https://wandb.ai/itamardvir/YOLOv5/runs/1ymi2p3h
wandb: Run data is saved locally in /content/gdrive/My Drive/covid19-detection/yolov5/wandb/run-20210829_001434-1ymi2p3h

Overriding model.yaml nc=80 with nc=1

                 from  n   params   module                                arguments
  0              -1    1     3520   models.common.Focus                   [3, 32, 3]
  1              -1    1    18560   models.common.Conv                    [32, 64, 3, 2]
  2              -1    1    18816   models.common.C3                      [64, 64, 1]
  3              -1    1    73984   models.common.Conv                    [64, 128, 3, 2]
  4              -1    3   156928   models.common.C3                      [128, 128, 3]
  5              -1    1   295424   models.common.Conv                    [128, 256, 3, 2]
  6              -1    3   625152   models.common.C3                      [256, 256, 3]
  7              -1    1  1180672   models.common.Conv                    [256, 512, 3, 2]
  8              -1    1   656896   models.common.SPP                     [512, 512, [5, 9, 13]]
  9              -1    1  1182720   models.common.C3                      [512, 512, 1, False]
 10              -1    1   131584   models.common.Conv                    [512, 256, 1, 1]
 11              -1    1        0   torch.nn.modules.upsampling.Upsample  [None, 2, 'nearest']
 12         [-1, 6]    1        0   models.common.Concat                  [1]
 13              -1    1   361984   models.common.C3                      [512, 256, 1, False]
 14              -1    1    33024   models.common.Conv                    [256, 128, 1, 1]
 15              -1    1        0   torch.nn.modules.upsampling.Upsample  [None, 2, 'nearest']
 16         [-1, 4]    1        0   models.common.Concat                  [1]
 17              -1    1    90880   models.common.C3                      [256, 128, 1, False]
 18              -1    1   147712   models.common.Conv                    [128, 128, 3, 2]
 19        [-1, 14]    1        0   models.common.Concat                  [1]
 20              -1    1   296448   models.common.C3                      [256, 256, 1, False]
 21              -1    1   590336   models.common.Conv                    [256, 256, 3, 2]
 22        [-1, 10]    1        0   models.common.Concat                  [1]
 23              -1    1  1182720   models.common.C3                      [512, 512, 1, False]
 24    [17, 20, 23]    1    16182   models.yolo.Detect                    [1, [[10, 13, 16, 30, 33, 23], [30, 61, 62, 45, 59, 119], [116, 90, 156, 198, 373, 326]], [128, 256, 512]]

Model Summary: 283 layers, 7063542 parameters, 7063542 gradients, 16.4 GFLOPs
Transferred 356/362 items from yolov5s.pt
Scaled weight_decay = 0.00144
optimizer: SGD with parameter groups 59 weight, 62 weight (no decay), 62 bias
albumentations: Blur(always_apply=False, p=0.1, blur_limit=(3, 7)), MedianBlur(always_apply=False, p=0.1, blur_limit=(3, 7)), ToGray(always_apply=False, p=0.01)
train: Scanning '/content/yolo-ds/train_fold_1' images and labels... 5067 found, 0 missing, 1630 empty, 0 corrupted
train: New cache created: /content/yolo-ds/train_fold_1.cache
val: Scanning '/content/yolo-ds/val_fold_1' images and labels... 1267 found, 0 missing, 410 empty, 0 corrupted
val: New cache created: /content/yolo-ds/val_fold_1.cache
[W pthreadpool-cpp.cc:90] Warning: Leaking Caffe2 thread-pool after fork. (function pthreadpool) [message repeated 4 times]
Plotting labels...
autoanchor: Analyzing anchors... anchors/target = 4.37, Best Possible Recall (BPR) = 0.9998
Image sizes 256 train, 256 val
Using 4 dataloader workers
Logging results to runs/train/project-yolov5s-256-300-fold-1
Starting training for 300 epochs...
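These console logs emit one validation summary line per epoch (class, images, labels, P, R, mAP@.5, mAP@.5:.95). For readers who want to chart the metrics without the W&B dashboard, a small helper like the following can pull those numbers out of a saved console log. This is a convenience sketch, not part of the YOLOv5 codebase; `parse_val_metrics` is a name chosen here for illustration.

```python
import re

# Matches YOLOv5 per-epoch validation summary lines such as
#   "all 1267 1569 0.505 0.467 0.429 0.12"
# capturing images, labels, P, R, mAP@.5, mAP@.5:.95 in order.
VAL_LINE = re.compile(
    r"\ball\s+(\d+)\s+(\d+)\s+([\d.]+)\s+([\d.]+)\s+([\d.]+)\s+([\d.]+)"
)

def parse_val_metrics(log_text):
    """Return one record per validation pass found in the console log."""
    records = []
    for m in VAL_LINE.finditer(log_text):
        images, labels, p, r, map50, map5095 = m.groups()
        records.append({
            "images": int(images),
            "labels": int(labels),
            "precision": float(p),
            "recall": float(r),
            "mAP@.5": float(map50),
            "mAP@.5:.95": float(map5095),
        })
    return records
```

The record index doubles as the epoch number, since YOLOv5 validates once per epoch.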
Fold-1 training progress (validated on 1267 images / 1569 labels; selected epochs shown — every fifth, plus the best-so-far epoch 113 — with per-batch progress bars trimmed):

Epoch     P       R       mAP@.5   mAP@.5:.95
  0/299   0.0105  0.0701  0.00373  0.000587
  5/299   0.0251  0.0854  0.00904  0.00156
 10/299   0.0858  0.224   0.0395   0.00814
 15/299   0.152   0.346   0.0948   0.0197
 20/299   0.183   0.405   0.134    0.029
 25/299   0.316   0.385   0.229    0.0575
 30/299   0.278   0.365   0.209    0.0471
 35/299   0.418   0.381   0.326    0.0847
 40/299   0.472   0.404   0.368    0.0994
 45/299   0.475   0.453   0.385    0.103
 50/299   0.487   0.421   0.368    0.101
 55/299   0.459   0.449   0.389    0.11
 60/299   0.467   0.483   0.406    0.118
 65/299   0.5     0.48    0.422    0.121
 70/299   0.5     0.489   0.437    0.132
 75/299   0.553   0.442   0.444    0.129
 80/299   0.523   0.453   0.423    0.125
 85/299   0.547   0.44    0.432    0.125
 90/299   0.483   0.482   0.44     0.13
 95/299   0.559   0.451   0.443    0.133
100/299   0.55    0.46    0.451    0.133
105/299   0.515   0.478   0.446    0.128
110/299   0.477   0.47    0.419    0.128
113/299   0.542   0.481   0.468    0.137
115/299   0.572   0.428   0.441    0.132
120/299   0.564   0.453   0.449    0.134
125/299   0.507   0.468   0.432    0.129
127/299   0.479   0.498   0.443    0.134

Epoch gpu_mem box obj cls labels img_size 128/299 9.29G 0.03212 0.00403 0 610 256: 100% 20/20 [00:12<00:00, 1.61it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 3/3 [00:05<00:00, 1.88s/it]
all 1267 1569 0.477 0.507 0.441 0.134 Epoch gpu_mem box obj cls labels img_size 129/299 9.29G 0.03218 0.004028 0 599 256: 100% 20/20 [00:12<00:00, 1.61it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 3/3 [00:05<00:00, 1.74s/it] all 1267 1569 0.556 0.452 0.448 0.132 Epoch gpu_mem box obj cls labels img_size 130/299 9.29G 0.03197 0.004019 0 588 256: 100% 20/20 [00:12<00:00, 1.62it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 3/3 [00:05<00:00, 1.73s/it] all 1267 1569 0.56 0.462 0.447 0.131 Epoch gpu_mem box obj cls labels img_size 131/299 9.29G 0.03197 0.003988 0 530 256: 100% 20/20 [00:12<00:00, 1.59it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 3/3 [00:05<00:00, 1.81s/it] all 1267 1569 0.517 0.474 0.436 0.131 Epoch gpu_mem box obj cls labels img_size 132/299 9.29G 0.03195 0.004081 0 557 256: 100% 20/20 [00:12<00:00, 1.55it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 3/3 [00:05<00:00, 1.74s/it] all 1267 1569 0.526 0.459 0.432 0.129 Epoch gpu_mem box obj cls labels img_size 133/299 9.29G 0.03211 0.004069 0 617 256: 100% 20/20 [00:12<00:00, 1.61it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 3/3 [00:05<00:00, 1.67s/it] all 1267 1569 0.518 0.429 0.406 0.112 Epoch gpu_mem box obj cls labels img_size 134/299 9.29G 0.03206 0.004014 0 582 256: 100% 20/20 [00:12<00:00, 1.59it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 3/3 [00:05<00:00, 1.77s/it] all 1267 1569 0.515 0.479 0.44 0.131 Epoch gpu_mem box obj cls labels img_size 135/299 9.29G 0.03222 0.004055 0 643 256: 100% 20/20 [00:12<00:00, 1.63it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 3/3 [00:05<00:00, 1.86s/it] all 1267 1569 0.534 0.468 0.458 0.136 Epoch gpu_mem box obj cls labels img_size 136/299 9.29G 0.03207 0.004063 0 628 256: 100% 20/20 [00:12<00:00, 1.61it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 3/3 [00:05<00:00, 1.90s/it] all 1267 1569 0.51 0.478 0.446 0.137 Epoch gpu_mem box obj cls labels img_size 137/299 9.29G 0.03215 0.003972 0 593 256: 100% 20/20 
[00:12<00:00, 1.63it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 3/3 [00:05<00:00, 1.86s/it] all 1267 1569 0.501 0.484 0.445 0.13 Epoch gpu_mem box obj cls labels img_size 138/299 9.29G 0.03215 0.004032 0 609 256: 100% 20/20 [00:12<00:00, 1.63it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 3/3 [00:05<00:00, 1.78s/it] all 1267 1569 0.537 0.46 0.441 0.13 Epoch gpu_mem box obj cls labels img_size 139/299 9.29G 0.0321 0.003988 0 613 256: 100% 20/20 [00:12<00:00, 1.58it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 3/3 [00:05<00:00, 1.74s/it] all 1267 1569 0.537 0.453 0.45 0.13 EarlyStopping patience 30 exceeded, stopping training. 140 epochs completed in 0.726 hours. Optimizer stripped from runs/train/project-yolov5s-256-300-fold-1/weights/last.pt, 14.3MB Optimizer stripped from runs/train/project-yolov5s-256-300-fold-1/weights/best.pt, 14.3MB wandb: Waiting for W&B process to finish, PID 1149 wandb: Program ended successfully. wandb: wandb: Find user logs for this run at: /content/gdrive/My Drive/covid19-detection/yolov5/wandb/run-20210829_001434-1ymi2p3h/logs/debug.log wandb: Find internal logs for this run at: /content/gdrive/My Drive/covid19-detection/yolov5/wandb/run-20210829_001434-1ymi2p3h/logs/debug-internal.log wandb: Run summary: wandb: train/box_loss 0.0321 wandb: train/obj_loss 0.00399 wandb: train/cls_loss 0.0 wandb: metrics/precision 0.53719 wandb: metrics/recall 0.45275 wandb: metrics/mAP_0.5 0.44994 wandb: metrics/mAP_0.5:0.95 0.12985 wandb: val/box_loss 0.03324 wandb: val/obj_loss 0.00191 wandb: val/cls_loss 0.0 wandb: x/lr0 0.00197 wandb: x/lr1 0.00197 wandb: x/lr2 0.00197 wandb: _runtime 2632 wandb: _timestamp 1630198706 wandb: _step 140 wandb: Run history: wandb: train/box_loss █▇▅▄▃▃▂▂▂▂▂▂▂▂▂▂▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁ wandb: train/obj_loss ▁▃▆█▇▆▅▅▄▄▄▃▃▃▃▃▃▃▃▂▃▂▂▂▂▃▂▂▂▂▂▂▂▂▂▂▂▂▂▂ wandb: train/cls_loss ▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁ wandb: metrics/precision ▁▁▁▂▃▃▃▅▅▅▆▇▆▇▇▆▇▇▇▇▇█▇█▇███▇█▇▇▇▇▇█▇▇▇▇ wandb: metrics/recall 
▁▃▂▄▆▆▇▆▆▆▆▆▇▇▇▇▇▇▇▇▇▇▇▆▇▇▇▇▇▇▇████▇██▇▇ wandb: metrics/mAP_0.5 ▁▁▁▂▂▂▃▄▅▅▆▆▆▇▇▆▇▇▇█▇██▇▇███▇██████▇████ wandb: metrics/mAP_0.5:0.95 ▁▁▁▂▂▂▃▄▄▄▅▆▅▆▆▆▇▇▇▇▇▇█▇▇█▇█▇███████████ wandb: val/box_loss █▇▅▃▃▃▂▂▂▂▂▁▂▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁ wandb: val/obj_loss ▅▆▇█▇▆▅▅▄▃▃▃▃▂▂▂▂▁▁▁▂▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁ wandb: val/cls_loss ▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁ wandb: x/lr0 ▁▁▂▃▃▃▄▅▅▆▆▇▇████████▇▇▇▇▇▇▇▇▇▇▆▆▆▆▆▆▆▆▅ wandb: x/lr1 ▁▁▂▃▃▃▄▅▅▆▆▇▇████████▇▇▇▇▇▇▇▇▇▇▆▆▆▆▆▆▆▆▅ wandb: x/lr2 ██▇▇▆▆▅▅▄▄▃▃▂▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁ wandb: _runtime ▁▁▁▂▂▂▂▂▂▃▃▃▃▃▃▄▄▄▄▄▅▅▅▅▅▅▆▆▆▆▆▆▇▇▇▇▇███ wandb: _timestamp ▁▁▁▂▂▂▂▂▂▃▃▃▃▃▃▄▄▄▄▄▅▅▅▅▅▅▆▆▆▆▆▆▇▇▇▇▇███ wandb: _step ▁▁▁▂▂▂▂▂▂▃▃▃▃▃▃▄▄▄▄▄▅▅▅▅▅▅▆▆▆▆▆▆▇▇▇▇▇███ wandb: wandb: Synced 5 W&B file(s), 134 media file(s), 1 artifact file(s) and 0 other file(s) wandb: wandb: Synced project-yolov5s-256-300-fold-1: https://wandb.ai/itamardvir/YOLOv5/runs/1ymi2p3h Results saved to runs/train/project-yolov5s-256-300-fold-1 train: weights=yolov5m.pt, cfg=, data=/content/yolo-ds/config_fold_2.yaml, hyp=data/hyps/hyp.finetune.yaml, epochs=200, batch_size=128, imgsz=256, rect=False, resume=False, nosave=False, noval=False, noautoanchor=False, evolve=None, bucket=, cache=None, image_weights=False, device=, multi_scale=False, single_cls=False, adam=False, sync_bn=False, workers=8, project=runs/train, entity=None, name=project-yolov5m-128-200-fold-2, exist_ok=False, quad=False, linear_lr=False, label_smoothing=0.0, upload_dataset=False, bbox_interval=-1, save_period=-1, artifact_alias=latest, local_rank=-1, freeze=0, patience=30 github: up to date with https://github.com/ultralytics/yolov5 ✅ YOLOv5 🚀 2021-8-28 torch 1.9.0+cu102 CUDA:0 (Tesla P100-PCIE-16GB, 16280.875MB) hyperparameters: lr0=0.0032, lrf=0.12, momentum=0.843, weight_decay=0.00036, warmup_epochs=2.0, warmup_momentum=0.5, warmup_bias_lr=0.05, box=0.0296, cls=0.243, cls_pw=0.631, obj=0.301, obj_pw=0.911, iou_t=0.2, anchor_t=2.91, fl_gamma=0.0, hsv_h=0.0138, hsv_s=0.664, hsv_v=0.464, degrees=0.373, 
translate=0.245, scale=0.898, shear=0.602, perspective=0.0, flipud=0.00856, fliplr=0.5, mosaic=1.0, mixup=0.243, copy_paste=0.0 TensorBoard: Start with 'tensorboard --logdir runs/train', view at http://localhost:6006/ wandb: Currently logged in as: itamardvir (use `wandb login --relogin` to force relogin) wandb: Tracking run with wandb version 0.12.1 wandb: Syncing run project-yolov5m-128-200-fold-2 wandb: ⭐️ View project at https://wandb.ai/itamardvir/YOLOv5 wandb: 🚀 View run at https://wandb.ai/itamardvir/YOLOv5/runs/2kj1r6zv wandb: Run data is saved locally in /content/gdrive/My Drive/covid19-detection/yolov5/wandb/run-20210829_005837-2kj1r6zv wandb: Run `wandb offline` to turn off syncing. Downloading https://github.com/ultralytics/yolov5/releases/download/v5.0/yolov5m.pt to yolov5m.pt... 100% 41.1M/41.1M [00:00<00:00, 76.7MB/s] Overriding model.yaml nc=80 with nc=1 from n params module arguments 0 -1 1 5280 models.common.Focus [3, 48, 3] 1 -1 1 41664 models.common.Conv [48, 96, 3, 2] 2 -1 2 65280 models.common.C3 [96, 96, 2] 3 -1 1 166272 models.common.Conv [96, 192, 3, 2] 4 -1 6 629760 models.common.C3 [192, 192, 6] 5 -1 1 664320 models.common.Conv [192, 384, 3, 2] 6 -1 6 2512896 models.common.C3 [384, 384, 6] 7 -1 1 2655744 models.common.Conv [384, 768, 3, 2] 8 -1 1 1476864 models.common.SPP [768, 768, [5, 9, 13]] 9 -1 2 4134912 models.common.C3 [768, 768, 2, False] 10 -1 1 295680 models.common.Conv [768, 384, 1, 1] 11 -1 1 0 torch.nn.modules.upsampling.Upsample [None, 2, 'nearest'] 12 [-1, 6] 1 0 models.common.Concat [1] 13 -1 2 1182720 models.common.C3 [768, 384, 2, False] 14 -1 1 74112 models.common.Conv [384, 192, 1, 1] 15 -1 1 0 torch.nn.modules.upsampling.Upsample [None, 2, 'nearest'] 16 [-1, 4] 1 0 models.common.Concat [1] 17 -1 2 296448 models.common.C3 [384, 192, 2, False] 18 -1 1 332160 models.common.Conv [192, 192, 3, 2] 19 [-1, 14] 1 0 models.common.Concat [1] 20 -1 2 1035264 models.common.C3 [384, 384, 2, False] 21 -1 1 1327872 
models.common.Conv [384, 384, 3, 2] 22 [-1, 10] 1 0 models.common.Concat [1] 23 -1 2 4134912 models.common.C3 [768, 768, 2, False] 24 [17, 20, 23] 1 24246 models.yolo.Detect [1, [[10, 13, 16, 30, 33, 23], [30, 61, 62, 45, 59, 119], [116, 90, 156, 198, 373, 326]], [192, 384, 768]] Model Summary: 391 layers, 21056406 parameters, 21056406 gradients, 50.4 GFLOPs Transferred 500/506 items from yolov5m.pt Scaled weight_decay = 0.00072 optimizer: SGD with parameter groups 83 weight, 86 weight (no decay), 86 bias albumentations: Blur(always_apply=False, p=0.1, blur_limit=(3, 7)), MedianBlur(always_apply=False, p=0.1, blur_limit=(3, 7)), ToGray(always_apply=False, p=0.01) train: Scanning '/content/yolo-ds/train_fold_2' images and labels...5067 found, 0 missing, 1637 empty, 0 corrupted: 100% 5067/5067 [00:00<00:00, 6117.81it/s] train: New cache created: /content/yolo-ds/train_fold_2.cache val: Scanning '/content/yolo-ds/val_fold_2' images and labels...1267 found, 0 missing, 403 empty, 0 corrupted: 100% 1267/1267 [00:00<00:00, 2476.54it/s] val: New cache created: /content/yolo-ds/val_fold_2.cache [W pthreadpool-cpp.cc:90] Warning: Leaking Caffe2 thread-pool after fork. (function pthreadpool) [W pthreadpool-cpp.cc:90] Warning: Leaking Caffe2 thread-pool after fork. (function pthreadpool) [W pthreadpool-cpp.cc:90] Warning: Leaking Caffe2 thread-pool after fork. (function pthreadpool) [W pthreadpool-cpp.cc:90] Warning: Leaking Caffe2 thread-pool after fork. (function pthreadpool) Plotting labels... autoanchor: Analyzing anchors... anchors/target = 4.37, Best Possible Recall (BPR) = 0.9998 Image sizes 256 train, 256 val Using 4 dataloader workers Logging results to runs/train/project-yolov5m-128-200-fold-2 Starting training for 200 epochs... 
Epoch gpu_mem box obj cls labels img_size 0/199 7.19G 0.06152 0.004039 0 250 256: 100% 40/40 [00:30<00:00, 1.31it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 5/5 [00:06<00:00, 1.26s/it] all 1267 1560 0.0055 0.112 0.00338 0.000586 Epoch gpu_mem box obj cls labels img_size 1/199 8.21G 0.05855 0.004331 0 199 256: 100% 40/40 [00:27<00:00, 1.45it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 5/5 [00:06<00:00, 1.23s/it] all 1267 1560 0.00521 0.182 0.00327 0.000522 Epoch gpu_mem box obj cls labels img_size 2/199 8.21G 0.05619 0.004541 0 203 256: 100% 40/40 [00:27<00:00, 1.44it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 5/5 [00:06<00:00, 1.22s/it] all 1267 1560 0.00734 0.161 0.00456 0.000766 Epoch gpu_mem box obj cls labels img_size 3/199 8.21G 0.05401 0.004836 0 228 256: 100% 40/40 [00:27<00:00, 1.44it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 5/5 [00:06<00:00, 1.22s/it] all 1267 1560 0.0131 0.0455 0.00534 0.000973 Epoch gpu_mem box obj cls labels img_size 4/199 8.21G 0.05153 0.004951 0 206 256: 100% 40/40 [00:27<00:00, 1.44it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 5/5 [00:06<00:00, 1.20s/it] all 1267 1560 0.0211 0.138 0.0098 0.00186 Epoch gpu_mem box obj cls labels img_size 5/199 8.21G 0.04976 0.005314 0 234 256: 100% 40/40 [00:27<00:00, 1.43it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 5/5 [00:06<00:00, 1.22s/it] all 1267 1560 0.0303 0.119 0.0132 0.00254 Epoch gpu_mem box obj cls labels img_size 6/199 8.21G 0.04786 0.005293 0 259 256: 100% 40/40 [00:27<00:00, 1.44it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 5/5 [00:06<00:00, 1.21s/it] all 1267 1560 0.0738 0.163 0.0352 0.00682 Epoch gpu_mem box obj cls labels img_size 7/199 8.21G 0.04583 0.005387 0 234 256: 100% 40/40 [00:27<00:00, 1.45it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 5/5 [00:06<00:00, 1.21s/it] all 1267 1560 0.12 0.274 0.0736 0.0163 Epoch gpu_mem box obj cls labels img_size 8/199 8.21G 0.04355 0.005483 0 197 256: 100% 40/40 [00:27<00:00, 
1.44it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 5/5 [00:06<00:00, 1.21s/it] all 1267 1560 0.131 0.322 0.0887 0.0205 Epoch gpu_mem box obj cls labels img_size 9/199 8.21G 0.04189 0.005519 0 219 256: 100% 40/40 [00:27<00:00, 1.44it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 5/5 [00:06<00:00, 1.20s/it] all 1267 1560 0.16 0.392 0.117 0.0284 Epoch gpu_mem box obj cls labels img_size 10/199 8.21G 0.04055 0.005494 0 276 256: 100% 40/40 [00:27<00:00, 1.44it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 5/5 [00:05<00:00, 1.20s/it] all 1267 1560 0.148 0.421 0.112 0.0244 Epoch gpu_mem box obj cls labels img_size 11/199 8.21G 0.03985 0.0054 0 185 256: 100% 40/40 [00:27<00:00, 1.44it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 5/5 [00:05<00:00, 1.19s/it] all 1267 1560 0.148 0.383 0.111 0.0233 Epoch gpu_mem box obj cls labels img_size 12/199 8.21G 0.03886 0.005191 0 198 256: 100% 40/40 [00:27<00:00, 1.44it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 5/5 [00:05<00:00, 1.15s/it] all 1267 1560 0.159 0.409 0.122 0.0243 Epoch gpu_mem box obj cls labels img_size 13/199 8.21G 0.03846 0.005006 0 229 256: 100% 40/40 [00:27<00:00, 1.45it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 5/5 [00:05<00:00, 1.14s/it] all 1267 1560 0.204 0.392 0.151 0.0327 Epoch gpu_mem box obj cls labels img_size 14/199 8.21G 0.038 0.004902 0 229 256: 100% 40/40 [00:27<00:00, 1.44it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 5/5 [00:05<00:00, 1.11s/it] all 1267 1560 0.215 0.459 0.18 0.0406 Epoch gpu_mem box obj cls labels img_size 15/199 8.21G 0.03774 0.004851 0 269 256: 100% 40/40 [00:27<00:00, 1.44it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 5/5 [00:05<00:00, 1.06s/it] all 1267 1560 0.267 0.442 0.218 0.0506 Epoch gpu_mem box obj cls labels img_size 16/199 8.21G 0.0373 0.0048 0 220 256: 100% 40/40 [00:27<00:00, 1.44it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 5/5 [00:05<00:00, 1.04s/it] all 1267 1560 0.319 0.393 0.245 0.0598 Epoch gpu_mem box 
obj cls labels img_size 17/199 8.21G 0.03704 0.004592 0 216 256: 100% 40/40 [00:27<00:00, 1.44it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 5/5 [00:05<00:00, 1.03s/it] all 1267 1560 0.351 0.392 0.273 0.0668 Epoch gpu_mem box obj cls labels img_size 18/199 8.21G 0.03677 0.004651 0 212 256: 100% 40/40 [00:27<00:00, 1.44it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 5/5 [00:05<00:00, 1.03s/it] all 1267 1560 0.387 0.36 0.287 0.0694 Epoch gpu_mem box obj cls labels img_size 19/199 8.21G 0.03638 0.004598 0 235 256: 100% 40/40 [00:27<00:00, 1.44it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 5/5 [00:05<00:00, 1.10s/it] all 1267 1560 0.404 0.41 0.32 0.0849 Epoch gpu_mem box obj cls labels img_size 20/199 8.21G 0.03636 0.004576 0 231 256: 100% 40/40 [00:27<00:00, 1.44it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 5/5 [00:04<00:00, 1.01it/s] all 1267 1560 0.365 0.492 0.323 0.0889 Epoch gpu_mem box obj cls labels img_size 21/199 8.21G 0.03594 0.00448 0 246 256: 100% 40/40 [00:27<00:00, 1.44it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 5/5 [00:04<00:00, 1.02it/s] all 1267 1560 0.435 0.422 0.349 0.0934 Epoch gpu_mem box obj cls labels img_size 22/199 8.21G 0.03575 0.004437 0 231 256: 100% 40/40 [00:27<00:00, 1.44it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 5/5 [00:04<00:00, 1.01it/s] all 1267 1560 0.475 0.426 0.389 0.102 Epoch gpu_mem box obj cls labels img_size 23/199 8.21G 0.03559 0.004358 0 228 256: 100% 40/40 [00:27<00:00, 1.44it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 5/5 [00:04<00:00, 1.00it/s] all 1267 1560 0.483 0.433 0.383 0.112 Epoch gpu_mem box obj cls labels img_size 24/199 8.21G 0.03528 0.004352 0 187 256: 100% 40/40 [00:27<00:00, 1.44it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 5/5 [00:04<00:00, 1.04it/s] all 1267 1560 0.519 0.445 0.411 0.119 Epoch gpu_mem box obj cls labels img_size 25/199 8.21G 0.03488 0.004316 0 234 256: 100% 40/40 [00:27<00:00, 1.44it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 
100% 5/5 [00:04<00:00, 1.04it/s] all 1267 1560 0.49 0.46 0.403 0.118 Epoch gpu_mem box obj cls labels img_size 26/199 8.21G 0.03483 0.004235 0 214 256: 100% 40/40 [00:27<00:00, 1.44it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 5/5 [00:04<00:00, 1.05it/s] all 1267 1560 0.477 0.497 0.424 0.126 Epoch gpu_mem box obj cls labels img_size 27/199 8.21G 0.03465 0.004262 0 226 256: 100% 40/40 [00:27<00:00, 1.44it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 5/5 [00:04<00:00, 1.05it/s] all 1267 1560 0.522 0.457 0.428 0.124 Epoch gpu_mem box obj cls labels img_size 28/199 8.21G 0.0345 0.004259 0 193 256: 100% 40/40 [00:27<00:00, 1.45it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 5/5 [00:04<00:00, 1.04it/s] all 1267 1560 0.526 0.462 0.438 0.127 Epoch gpu_mem box obj cls labels img_size 29/199 8.21G 0.03436 0.004218 0 203 256: 100% 40/40 [00:27<00:00, 1.44it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 5/5 [00:04<00:00, 1.03it/s] all 1267 1560 0.553 0.454 0.433 0.13 Epoch gpu_mem box obj cls labels img_size 30/199 8.21G 0.03412 0.004232 0 208 256: 100% 40/40 [00:27<00:00, 1.44it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 5/5 [00:04<00:00, 1.05it/s] all 1267 1560 0.509 0.474 0.428 0.128 Epoch gpu_mem box obj cls labels img_size 31/199 8.21G 0.03435 0.004201 0 207 256: 100% 40/40 [00:27<00:00, 1.44it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 5/5 [00:04<00:00, 1.05it/s] all 1267 1560 0.557 0.469 0.436 0.13 Epoch gpu_mem box obj cls labels img_size 32/199 8.21G 0.03416 0.004233 0 226 256: 100% 40/40 [00:27<00:00, 1.44it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 5/5 [00:04<00:00, 1.05it/s] all 1267 1560 0.487 0.477 0.429 0.123 Epoch gpu_mem box obj cls labels img_size 33/199 8.21G 0.03394 0.004133 0 205 256: 100% 40/40 [00:27<00:00, 1.44it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 5/5 [00:04<00:00, 1.06it/s] all 1267 1560 0.53 0.49 0.447 0.135 Epoch gpu_mem box obj cls labels img_size 34/199 8.21G 0.0339 0.004223 0 198 
256: 100% 40/40 [00:27<00:00, 1.44it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 5/5 [00:04<00:00, 1.05it/s] all 1267 1560 0.527 0.477 0.424 0.128 Epoch gpu_mem box obj cls labels img_size 35/199 8.21G 0.0337 0.004207 0 197 256: 100% 40/40 [00:27<00:00, 1.45it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 5/5 [00:04<00:00, 1.05it/s] all 1267 1560 0.554 0.48 0.449 0.13 Epoch gpu_mem box obj cls labels img_size 36/199 8.21G 0.03364 0.004177 0 248 256: 100% 40/40 [00:27<00:00, 1.45it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 5/5 [00:04<00:00, 1.05it/s] all 1267 1560 0.525 0.468 0.432 0.125 Epoch gpu_mem box obj cls labels img_size 37/199 8.21G 0.03354 0.004164 0 176 256: 100% 40/40 [00:27<00:00, 1.44it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 5/5 [00:04<00:00, 1.06it/s] all 1267 1560 0.567 0.489 0.456 0.137 Epoch gpu_mem box obj cls labels img_size 38/199 8.21G 0.03356 0.004066 0 189 256: 100% 40/40 [00:27<00:00, 1.44it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 5/5 [00:04<00:00, 1.06it/s] all 1267 1560 0.557 0.473 0.455 0.136 Epoch gpu_mem box obj cls labels img_size 39/199 8.21G 0.0335 0.004154 0 216 256: 100% 40/40 [00:27<00:00, 1.44it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 5/5 [00:05<00:00, 1.06s/it] all 1267 1560 0.512 0.509 0.446 0.134 Epoch gpu_mem box obj cls labels img_size 40/199 8.21G 0.0334 0.0041 0 214 256: 100% 40/40 [00:27<00:00, 1.44it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 5/5 [00:04<00:00, 1.04it/s] all 1267 1560 0.532 0.447 0.426 0.131 Epoch gpu_mem box obj cls labels img_size 41/199 8.21G 0.03329 0.004096 0 248 256: 100% 40/40 [00:27<00:00, 1.44it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 5/5 [00:04<00:00, 1.05it/s] all 1267 1560 0.531 0.404 0.385 0.114 Epoch gpu_mem box obj cls labels img_size 42/199 8.21G 0.03321 0.004045 0 213 256: 100% 40/40 [00:27<00:00, 1.44it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 5/5 [00:04<00:00, 1.04it/s] all 1267 1560 0.548 0.485 0.456 
0.144 Epoch gpu_mem box obj cls labels img_size 43/199 8.21G 0.03301 0.004129 0 244 256: 100% 40/40 [00:27<00:00, 1.44it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 5/5 [00:04<00:00, 1.05it/s] all 1267 1560 0.55 0.481 0.457 0.143 Epoch gpu_mem box obj cls labels img_size 44/199 8.21G 0.03322 0.004102 0 214 256: 100% 40/40 [00:27<00:00, 1.44it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 5/5 [00:04<00:00, 1.06it/s] all 1267 1560 0.559 0.451 0.444 0.133 Epoch gpu_mem box obj cls labels img_size 45/199 8.21G 0.03305 0.00414 0 224 256: 100% 40/40 [00:27<00:00, 1.45it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 5/5 [00:04<00:00, 1.05it/s] all 1267 1560 0.584 0.466 0.464 0.148 Epoch gpu_mem box obj cls labels img_size 46/199 8.21G 0.03307 0.004097 0 201 256: 100% 40/40 [00:27<00:00, 1.44it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 5/5 [00:04<00:00, 1.05it/s] all 1267 1560 0.559 0.485 0.477 0.146 Epoch gpu_mem box obj cls labels img_size 47/199 8.21G 0.03282 0.004029 0 215 256: 100% 40/40 [00:27<00:00, 1.44it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 5/5 [00:04<00:00, 1.06it/s] all 1267 1560 0.562 0.478 0.468 0.142 Epoch gpu_mem box obj cls labels img_size 48/199 8.21G 0.03277 0.004127 0 237 256: 100% 40/40 [00:27<00:00, 1.44it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 5/5 [00:04<00:00, 1.05it/s] all 1267 1560 0.549 0.476 0.465 0.146 Epoch gpu_mem box obj cls labels img_size 49/199 8.21G 0.03298 0.004105 0 242 256: 100% 40/40 [00:27<00:00, 1.44it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 5/5 [00:04<00:00, 1.05it/s] all 1267 1560 0.523 0.456 0.432 0.134 Epoch gpu_mem box obj cls labels img_size 50/199 8.21G 0.03294 0.004069 0 215 256: 100% 40/40 [00:27<00:00, 1.44it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 5/5 [00:04<00:00, 1.05it/s] all 1267 1560 0.543 0.481 0.463 0.148 Epoch gpu_mem box obj cls labels img_size 51/199 8.21G 0.03289 0.004046 0 204 256: 100% 40/40 [00:27<00:00, 1.44it/s] Class Images Labels P 
R mAP@.5 mAP@.5:.95: 100% 5/5 [00:04<00:00, 1.05it/s] all 1267 1560 0.5 0.517 0.456 0.143 Epoch gpu_mem box obj cls labels img_size 52/199 8.21G 0.03287 0.004022 0 188 256: 100% 40/40 [00:27<00:00, 1.44it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 5/5 [00:04<00:00, 1.06it/s] all 1267 1560 0.551 0.474 0.453 0.14 Epoch gpu_mem box obj cls labels img_size 53/199 8.21G 0.0329 0.004096 0 234 256: 100% 40/40 [00:27<00:00, 1.45it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 5/5 [00:04<00:00, 1.05it/s] all 1267 1560 0.562 0.5 0.476 0.147 Epoch gpu_mem box obj cls labels img_size 54/199 8.21G 0.0325 0.004122 0 227 256: 100% 40/40 [00:27<00:00, 1.44it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 5/5 [00:04<00:00, 1.05it/s] all 1267 1560 0.568 0.487 0.485 0.151 Epoch gpu_mem box obj cls labels img_size 55/199 8.21G 0.03274 0.003989 0 205 256: 100% 40/40 [00:27<00:00, 1.44it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 5/5 [00:04<00:00, 1.05it/s] all 1267 1560 0.553 0.484 0.467 0.148 Epoch gpu_mem box obj cls labels img_size 56/199 8.21G 0.03245 0.003968 0 191 256: 100% 40/40 [00:27<00:00, 1.44it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 5/5 [00:04<00:00, 1.05it/s] all 1267 1560 0.532 0.474 0.455 0.147 Epoch gpu_mem box obj cls labels img_size 57/199 8.21G 0.03269 0.004018 0 204 256: 100% 40/40 [00:27<00:00, 1.45it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 5/5 [00:04<00:00, 1.05it/s] all 1267 1560 0.552 0.494 0.467 0.146 Epoch gpu_mem box obj cls labels img_size 58/199 8.21G 0.03255 0.004024 0 221 256: 100% 40/40 [00:27<00:00, 1.44it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 5/5 [00:04<00:00, 1.04it/s] all 1267 1560 0.543 0.469 0.441 0.14 Epoch gpu_mem box obj cls labels img_size 59/199 8.21G 0.0325 0.004077 0 258 256: 100% 40/40 [00:27<00:00, 1.45it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 5/5 [00:05<00:00, 1.06s/it] all 1267 1560 0.58 0.441 0.443 0.142 Epoch gpu_mem box obj cls labels img_size 60/199 8.21G 
0.03245 0.004129 0 258 256: 100% 40/40 [00:27<00:00, 1.44it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 5/5 [00:04<00:00, 1.04it/s] all 1267 1560 0.553 0.494 0.487 0.149 Epoch gpu_mem box obj cls labels img_size 61/199 8.21G 0.03252 0.004053 0 196 256: 100% 40/40 [00:27<00:00, 1.44it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 5/5 [00:04<00:00, 1.05it/s] all 1267 1560 0.598 0.478 0.482 0.147 Epoch gpu_mem box obj cls labels img_size 62/199 8.21G 0.0328 0.004031 0 218 256: 100% 40/40 [00:27<00:00, 1.44it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 5/5 [00:04<00:00, 1.05it/s] all 1267 1560 0.567 0.476 0.471 0.146 Epoch gpu_mem box obj cls labels img_size 63/199 8.21G 0.03247 0.004036 0 232 256: 100% 40/40 [00:27<00:00, 1.44it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 5/5 [00:04<00:00, 1.05it/s] all 1267 1560 0.58 0.478 0.474 0.15 Epoch gpu_mem box obj cls labels img_size 64/199 8.21G 0.03243 0.004002 0 256 256: 100% 40/40 [00:27<00:00, 1.44it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 5/5 [00:04<00:00, 1.04it/s] all 1267 1560 0.55 0.497 0.478 0.15 Epoch gpu_mem box obj cls labels img_size 65/199 8.21G 0.03225 0.00401 0 207 256: 100% 40/40 [00:27<00:00, 1.44it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 5/5 [00:04<00:00, 1.05it/s] all 1267 1560 0.602 0.462 0.471 0.149 Epoch gpu_mem box obj cls labels img_size 66/199 8.21G 0.03225 0.00405 0 193 256: 100% 40/40 [00:27<00:00, 1.45it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 5/5 [00:04<00:00, 1.05it/s] all 1267 1560 0.495 0.517 0.467 0.151 Epoch gpu_mem box obj cls labels img_size 67/199 8.21G 0.03224 0.004039 0 221 256: 100% 40/40 [00:27<00:00, 1.44it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 5/5 [00:04<00:00, 1.05it/s] all 1267 1560 0.554 0.485 0.471 0.154 Epoch gpu_mem box obj cls labels img_size 68/199 8.21G 0.03226 0.0041 0 210 256: 100% 40/40 [00:27<00:00, 1.44it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 5/5 [00:04<00:00, 1.06it/s] all 1267 1560 
0.564 0.47 0.466 0.146 Epoch gpu_mem box obj cls labels img_size 69/199 8.21G 0.03235 0.004009 0 216 256: 100% 40/40 [00:27<00:00, 1.45it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 5/5 [00:04<00:00, 1.04it/s] all 1267 1560 0.605 0.46 0.484 0.151 Epoch gpu_mem box obj cls labels img_size 70/199 8.21G 0.0321 0.003964 0 188 256: 100% 40/40 [00:27<00:00, 1.45it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 5/5 [00:04<00:00, 1.05it/s] all 1267 1560 0.567 0.472 0.467 0.146 Epoch gpu_mem box obj cls labels img_size 71/199 8.21G 0.03247 0.003996 0 216 256: 100% 40/40 [00:27<00:00, 1.45it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 5/5 [00:04<00:00, 1.06it/s] all 1267 1560 0.547 0.501 0.476 0.147 Epoch gpu_mem box obj cls labels img_size 72/199 8.21G 0.03228 0.004039 0 230 256: 100% 40/40 [00:27<00:00, 1.45it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 5/5 [00:04<00:00, 1.06it/s] all 1267 1560 0.56 0.479 0.458 0.146 Epoch gpu_mem box obj cls labels img_size 73/199 8.21G 0.03221 0.003985 0 205 256: 100% 40/40 [00:27<00:00, 1.44it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 5/5 [00:04<00:00, 1.03it/s] all 1267 1560 0.588 0.457 0.479 0.149 Epoch gpu_mem box obj cls labels img_size 74/199 8.21G 0.0323 0.003981 0 227 256: 100% 40/40 [00:27<00:00, 1.45it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 5/5 [00:04<00:00, 1.05it/s] all 1267 1560 0.527 0.482 0.451 0.146 Epoch gpu_mem box obj cls labels img_size 75/199 8.21G 0.03201 0.003948 0 194 256: 100% 40/40 [00:27<00:00, 1.45it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 5/5 [00:04<00:00, 1.05it/s] all 1267 1560 0.565 0.461 0.471 0.151 Epoch gpu_mem box obj cls labels img_size 76/199 8.21G 0.03219 0.003959 0 200 256: 100% 40/40 [00:27<00:00, 1.44it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 5/5 [00:04<00:00, 1.06it/s] all 1267 1560 0.582 0.464 0.472 0.147 Epoch gpu_mem box obj cls labels img_size 77/199 8.21G 0.03192 0.003953 0 204 256: 100% 40/40 [00:27<00:00, 1.45it/s] Class 
Images Labels P R mAP@.5 mAP@.5:.95: 100% 5/5 [00:04<00:00, 1.06it/s] all 1267 1560 0.573 0.483 0.485 0.153
[... per-epoch output for epochs 78-107 trimmed: validation mAP@.5 plateaued around 0.45-0.49 with no further improvement ...]
EarlyStopping patience 30 exceeded, stopping training. 108 epochs completed in 1.017 hours.
Optimizer stripped from runs/train/project-yolov5m-128-200-fold-2/weights/last.pt, 42.4MB
Optimizer stripped from runs/train/project-yolov5m-128-200-fold-2/weights/best.pt, 42.4MB
wandb: Run summary:
wandb:   train/box_loss        0.03137
wandb:   train/obj_loss        0.00398
wandb:   metrics/precision     0.55289
wandb:   metrics/recall        0.47403
wandb:   metrics/mAP_0.5       0.45891
wandb:   metrics/mAP_0.5:0.95  0.14822
wandb:   val/box_loss          0.03203
wandb:   val/obj_loss          0.00187
wandb: Synced project-yolov5m-128-200-fold-2: https://wandb.ai/itamardvir/YOLOv5/runs/2kj1r6zv
Results saved to runs/train/project-yolov5m-128-200-fold-2
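The W&B run summaries in these logs are plain text, so when comparing folds it can help to pull the numbers into a Python dict. Below is a minimal sketch; the `parse_wandb_summary` helper and the `sample` string are written here for illustration and are not part of the YOLOv5 or W&B output:

```python
import re

def parse_wandb_summary(log_text):
    """Extract 'wandb: <key> <number>' pairs from a W&B run-summary dump."""
    metrics = {}
    for match in re.finditer(r"wandb:\s+(\S+)\s+(-?\d+\.?\d*)", log_text):
        key, value = match.groups()
        metrics[key] = float(value)
    return metrics

# Example on the fold-2 summary lines from the log above
sample = """
wandb: metrics/precision 0.55289
wandb: metrics/recall 0.47403
wandb: metrics/mAP_0.5 0.45891
"""
summary = parse_wandb_summary(sample)
print(summary["metrics/mAP_0.5"])
```

With all the fold summaries parsed this way, averaging or tabulating the per-fold metrics becomes a one-liner instead of manual copying from the console.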
train: weights=yolov5l.pt, data=/content/yolo-ds/config_fold_3.yaml, hyp=data/hyps/hyp.finetune.yaml, epochs=120, batch_size=64, imgsz=256, name=project-yolov5l-64-120-fold-3, patience=30 (remaining arguments at their defaults)
YOLOv5 🚀 2021-8-28 torch 1.9.0+cu102 CUDA:0 (Tesla P100-PCIE-16GB, 16280.875MB)
hyperparameters: lr0=0.0032, lrf=0.12, momentum=0.843, weight_decay=0.00036, warmup_epochs=2.0, warmup_momentum=0.5, warmup_bias_lr=0.05, box=0.0296, cls=0.243, cls_pw=0.631, obj=0.301, obj_pw=0.911, iou_t=0.2, anchor_t=2.91, fl_gamma=0.0, hsv_h=0.0138, hsv_s=0.664, hsv_v=0.464, degrees=0.373, translate=0.245, scale=0.898, shear=0.602, perspective=0.0, flipud=0.00856, fliplr=0.5, mosaic=1.0, mixup=0.243, copy_paste=0.0
wandb: Syncing run project-yolov5l-64-120-fold-3 (https://wandb.ai/itamardvir/YOLOv5/runs/37ym9m56)
Downloading https://github.com/ultralytics/yolov5/releases/download/v5.0/yolov5l.pt to yolov5l.pt... 100% 90.2M/90.2M [00:01<00:00, 48.6MB/s]
Overriding model.yaml nc=80 with nc=1
[... layer-by-layer model table trimmed ...]
Model Summary: 499 layers, 46631350 parameters, 46631350 gradients, 114.2 GFLOPs
Transferred 644/650 items from yolov5l.pt
optimizer: SGD with parameter groups 107 weight, 110 weight (no decay), 110 bias
albumentations: Blur(always_apply=False, p=0.1, blur_limit=(3, 7)), MedianBlur(always_apply=False, p=0.1, blur_limit=(3, 7)), ToGray(always_apply=False, p=0.01)
train: Scanning '/content/yolo-ds/train_fold_3' images and labels...5067 found, 0 missing, 1641 empty, 0 corrupted
val: Scanning '/content/yolo-ds/val_fold_3' images and labels...1267 found, 0 missing, 399 empty, 0 corrupted
autoanchor: Analyzing anchors... anchors/target = 4.37, Best Possible Recall (BPR) = 1.0000
Image sizes 256 train, 256 val
Starting training for 120 epochs...
[... per-epoch output for epochs 0-77 trimmed: validation mAP@.5 climbed from 0.005 at epoch 0 to ~0.44 by epoch 23, then plateaued around 0.43-0.46 ...]
EarlyStopping patience 30 exceeded, stopping training. 78 epochs completed in 1.420 hours.
Optimizer stripped from runs/train/project-yolov5l-64-120-fold-3/weights/last.pt, 93.7MB
Optimizer stripped from runs/train/project-yolov5l-64-120-fold-3/weights/best.pt, 93.7MB
wandb: Run summary:
wandb:   train/box_loss        0.0309
wandb:   train/obj_loss        0.0039
wandb:   metrics/precision     0.55274
wandb:   metrics/recall        0.46836
wandb:   metrics/mAP_0.5       0.44644
wandb:   metrics/mAP_0.5:0.95  0.13711
wandb:   val/box_loss          0.03248
wandb:   val/obj_loss          0.00193
wandb: Synced project-yolov5l-64-120-fold-3: https://wandb.ai/itamardvir/YOLOv5/runs/37ym9m56
Results saved to runs/train/project-yolov5l-64-120-fold-3
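The runs report precision and recall separately but no combined score. As a quick sanity check when comparing folds, we can compute the F1 score (the harmonic mean of precision and recall) from the printed summary values; this is plain arithmetic on the numbers above, not something the competition scores on (the leaderboard uses a mAP-based metric):

```python
def f1(precision, recall):
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# Final validation metrics printed by the two completed runs above
folds = {
    "yolov5m-fold-2": (0.55289, 0.47403),
    "yolov5l-fold-3": (0.55274, 0.46836),
}
for name, (p, r) in folds.items():
    print(f"{name}: F1 = {f1(p, r):.3f}")
```

This gives F1 ≈ 0.510 for fold 2 and ≈ 0.507 for fold 3 — essentially a tie, consistent with the near-identical mAP@.5 values of the two runs.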
weights=yolov5x.pt, cfg=, data=/content/yolo-ds/config_fold_4.yaml, hyp=data/hyps/hyp.finetune.yaml, epochs=120, batch_size=64, imgsz=256, rect=False, resume=False, nosave=False, noval=False, noautoanchor=False, evolve=None, bucket=, cache=None, image_weights=False, device=, multi_scale=False, single_cls=False, adam=False, sync_bn=False, workers=8, project=runs/train, entity=None, name=project-yolov5x-64-120-fold-4, exist_ok=False, quad=False, linear_lr=False, label_smoothing=0.0, upload_dataset=False, bbox_interval=-1, save_period=-1, artifact_alias=latest, local_rank=-1, freeze=0, patience=30 github: up to date with https://github.com/ultralytics/yolov5 ✅ YOLOv5 🚀 2021-8-28 torch 1.9.0+cu102 CUDA:0 (Tesla P100-PCIE-16GB, 16280.875MB) hyperparameters: lr0=0.0032, lrf=0.12, momentum=0.843, weight_decay=0.00036, warmup_epochs=2.0, warmup_momentum=0.5, warmup_bias_lr=0.05, box=0.0296, cls=0.243, cls_pw=0.631, obj=0.301, obj_pw=0.911, iou_t=0.2, anchor_t=2.91, fl_gamma=0.0, hsv_h=0.0138, hsv_s=0.664, hsv_v=0.464, degrees=0.373, translate=0.245, scale=0.898, shear=0.602, perspective=0.0, flipud=0.00856, fliplr=0.5, mosaic=1.0, mixup=0.243, copy_paste=0.0 TensorBoard: Start with 'tensorboard --logdir runs/train', view at http://localhost:6006/ wandb: Currently logged in as: itamardvir (use `wandb login --relogin` to force relogin) wandb: Tracking run with wandb version 0.12.1 wandb: Syncing run project-yolov5x-64-120-fold-4 wandb: ⭐️ View project at https://wandb.ai/itamardvir/YOLOv5 wandb: 🚀 View run at https://wandb.ai/itamardvir/YOLOv5/runs/1obyrr4c wandb: Run data is saved locally in /content/gdrive/My Drive/covid19-detection/yolov5/wandb/run-20210829_032551-1obyrr4c wandb: Run `wandb offline` to turn off syncing. Downloading https://github.com/ultralytics/yolov5/releases/download/v5.0/yolov5x.pt to yolov5x.pt... 
100% 168M/168M [00:02<00:00, 79.4MB/s]
Overriding model.yaml nc=80 with nc=1

                 from  n     params  module                                arguments
  0                -1  1       8800  models.common.Focus                   [3, 80, 3]
  1                -1  1     115520  models.common.Conv                    [80, 160, 3, 2]
  2                -1  4     309120  models.common.C3                      [160, 160, 4]
  3                -1  1     461440  models.common.Conv                    [160, 320, 3, 2]
  4                -1  12   3285760  models.common.C3                      [320, 320, 12]
  5                -1  1    1844480  models.common.Conv                    [320, 640, 3, 2]
  6                -1  12  13125120  models.common.C3                      [640, 640, 12]
  7                -1  1    7375360  models.common.Conv                    [640, 1280, 3, 2]
  8                -1  1    4099840  models.common.SPP                     [1280, 1280, [5, 9, 13]]
  9                -1  4   19676160  models.common.C3                      [1280, 1280, 4, False]
 10                -1  1     820480  models.common.Conv                    [1280, 640, 1, 1]
 11                -1  1          0  torch.nn.modules.upsampling.Upsample  [None, 2, 'nearest']
 12           [-1, 6]  1          0  models.common.Concat                  [1]
 13                -1  4    5332480  models.common.C3                      [1280, 640, 4, False]
 14                -1  1     205440  models.common.Conv                    [640, 320, 1, 1]
 15                -1  1          0  torch.nn.modules.upsampling.Upsample  [None, 2, 'nearest']
 16           [-1, 4]  1          0  models.common.Concat                  [1]
 17                -1  4    1335040  models.common.C3                      [640, 320, 4, False]
 18                -1  1     922240  models.common.Conv                    [320, 320, 3, 2]
 19          [-1, 14]  1          0  models.common.Concat                  [1]
 20                -1  4    4922880  models.common.C3                      [640, 640, 4, False]
 21                -1  1    3687680  models.common.Conv                    [640, 640, 3, 2]
 22          [-1, 10]  1          0  models.common.Concat                  [1]
 23                -1  4   19676160  models.common.C3                      [1280, 1280, 4, False]
 24      [17, 20, 23]  1      40374  models.yolo.Detect                    [1, [[10, 13, 16, 30, 33, 23], [30, 61, 62, 45, 59, 119], [116, 90, 156, 198, 373, 326]], [320, 640, 1280]]
Model Summary: 607 layers, 87244374 parameters, 87244374 gradients, 217.3 GFLOPs

Transferred 788/794 items from yolov5x.pt
Scaled weight_decay = 0.00036
optimizer: SGD with parameter groups 131 weight, 134 weight (no decay), 134 bias
albumentations: Blur(always_apply=False, p=0.1, blur_limit=(3, 7)), MedianBlur(always_apply=False, p=0.1, blur_limit=(3, 7)), ToGray(always_apply=False, p=0.01)
train: Scanning '/content/yolo-ds/train_fold_4' images and labels... 5068 found, 0 missing, 1629 empty, 0 corrupted: 100%
5068/5068 [00:01<00:00, 4804.07it/s]
train: New cache created: /content/yolo-ds/train_fold_4.cache
val: Scanning '/content/yolo-ds/val_fold_4' images and labels... 1266 found, 0 missing, 411 empty, 0 corrupted: 100% 1266/1266 [00:00<00:00, 2368.29it/s]
val: New cache created: /content/yolo-ds/val_fold_4.cache
Plotting labels...
autoanchor: Analyzing anchors... anchors/target = 4.37, Best Possible Recall (BPR) = 0.9998
Image sizes 256 train, 256 val
Using 4 dataloader workers
Logging results to runs/train/project-yolov5x-64-120-fold-4
Starting training for 120 epochs...

     Epoch   gpu_mem       box       obj   cls   labels  img_size
     0/119     9.48G   0.06056  0.003995     0       34       256
     Class    Images    Labels         P         R    mAP@.5  mAP@.5:.95
       all      1266      1566   0.00663     0.211   0.00383    0.000678
       ...                                 (epochs 1-36 elided)
    37/119     10.3G   0.03188  0.004018     0       36       256
       all      1266      1566      0.53     0.513     0.461       0.144
       ...                                (epochs 38-66 elided)
    67/119     10.3G   0.03046   0.00387     0       39       256
       all      1266      1566     0.517     0.497     0.448       0.139
EarlyStopping patience 30 exceeded, stopping training.

68 epochs completed in 2.184 hours.
Optimizer stripped from runs/train/project-yolov5x-64-120-fold-4/weights/last.pt, 175.0MB
Optimizer stripped from runs/train/project-yolov5x-64-120-fold-4/weights/best.pt, 175.0MB
wandb: Waiting for W&B process to finish, PID 2188
wandb: Program ended successfully.
wandb: Find user logs for this run at: /content/gdrive/My Drive/covid19-detection/yolov5/wandb/run-20210829_032551-1obyrr4c/logs/debug.log
wandb: Find internal logs for this run at: /content/gdrive/My Drive/covid19-detection/yolov5/wandb/run-20210829_032551-1obyrr4c/logs/debug-internal.log
wandb: Run summary:
wandb:          train/box_loss 0.03046
wandb:          train/obj_loss 0.00387
wandb:          train/cls_loss 0.0
wandb:       metrics/precision 0.51687
wandb:          metrics/recall 0.49745
wandb:         metrics/mAP_0.5 0.44796
wandb:    metrics/mAP_0.5:0.95 0.1394
wandb:            val/box_loss 0.03254
wandb:            val/obj_loss 0.0019
wandb:            val/cls_loss 0.0
wandb:                   x/lr0 0.00157
wandb:                   x/lr1 0.00157
wandb:                   x/lr2 0.00157
wandb:                _runtime 7886
wandb:              _timestamp 1630215437
wandb:                   _step 68
wandb: Synced 5 W&B file(s), 166 media file(s), 1 artifact file(s) and 0 other file(s)
wandb: Synced project-yolov5x-64-120-fold-4: https://wandb.ai/itamardvir/YOLOv5/runs/1obyrr4c
Results saved to runs/train/project-yolov5x-64-120-fold-4
Below is a W&B graph summarizing the training process for the five folds.
The epoch settings are based on earlier experiments, which showed that longer training leads to overfitting: each model stopped improving after a certain number of epochs (a different number for each model). All the runs were stopped by the early-stopping mechanism before reaching their epoch limit, but the simplest model (yolov5), set to 400 epochs, still gave the best results.
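The patience rule visible in the logs above ("EarlyStopping patience 30 exceeded") can be sketched as follows. `should_stop` is a hypothetical helper, not YOLOv5's actual `EarlyStopping` class, but it captures the same idea: stop once the best validation fitness has not improved for `patience` epochs.

```python
def should_stop(map_history, patience=30):
    """Return True once the best validation mAP hasn't improved for
    `patience` consecutive epochs (a simplified sketch of the patience
    rule YOLOv5 applies during training).

    map_history -- list of per-epoch validation mAP values so far.
    """
    # Index of the epoch with the best fitness seen so far
    best_epoch = max(range(len(map_history)), key=map_history.__getitem__)
    # Stop when the best epoch is at least `patience` epochs behind us
    return (len(map_history) - 1 - best_epoch) >= patience

# A run that peaks early is cut off once 30 stale epochs accumulate:
history = [0.10, 0.30, 0.45] + [0.40] * 30
print(should_stop(history, patience=30))
```

This is why the fold-4 run above finished after 68 of its 120 configured epochs: its best epoch came around epoch 37, and 30 epochs later training was halted.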
none class
In addition to predicting the bounding boxes, we have to supply, for each image, the probability that there are no findings at all in the CXR, as the confidence of the none class. The common way on Kaggle to handle this class was to train a separate model that decides whether the image contains findings or not. But according to the competition description, "no findings" means a negative case, so such a model is exactly a binary model for the negative class, which, as we saw earlier, is much less accurate than the four-class model! So the right thing to do is simply to reuse the negative score here as well. As we will see later, this simple change can also improve any of the open solutions on Kaggle.
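A minimal sketch of this none-class scoring, with hypothetical variable names (the real pipeline reads the four-class classifier's softmax output and the YOLOv5 ensemble's detections):

```python
def image_prediction_string(boxes, negative_prob):
    """Build a competition-style prediction string for one image.

    boxes         -- list of (confidence, x_min, y_min, x_max, y_max)
                     opacity detections from the detector
    negative_prob -- the 4-class model's P(negative) for the study this
                     image belongs to, reused as the 'none' confidence
    """
    # 'none' gets the study-level negative score instead of a separate model
    parts = ["none {:.4f} 0 0 1 1".format(negative_prob)]
    for conf, x0, y0, x1, y1 in boxes:
        parts.append("opacity {:.4f} {} {} {} {}".format(conf, x0, y0, x1, y1))
    return " ".join(parts)

print(image_prediction_string([(0.71, 10, 20, 120, 240)], 0.08))
# none 0.0800 0 0 1 1 opacity 0.7100 10 20 120 240
```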
Putting all the above models together, the final score of this solution is 0.567.
As a solution for the competition, I could find open solutions on Kaggle that were much better than mine. Armed with the above insight about the none class, I took the best open solution I could find on Kaggle before the end of the competition, and set its none-class prediction to the negative probability of the study to which each image belongs. This change moved the solution up about 50 places on the LB and earned a bronze medal 🥉✌✌✌!!
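The patch itself is tiny. Assuming the open solution emits one prediction string per image (the regex and sample values below are illustrative, not the actual submission contents), it amounts to something like:

```python
import re

def patch_none_confidence(pred_string, negative_prob):
    """Replace the confidence that follows the 'none' label with the
    study-level negative probability from the 4-class classifier."""
    return re.sub(r"none\s+\S+", "none {:.4f}".format(negative_prob), pred_string)

# Overwrite the original none score with the negative-class probability:
patched = patch_none_confidence("none 0.9312 0 0 1 1 opacity 0.55 5 5 50 90", 0.12)
print(patched)
# none 0.1200 0 0 1 1 opacity 0.55 5 5 50 90
```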
How could I improve my solution?
On the study level, I think that obtaining more data could be very helpful. During the project, obtaining more data seemed to require a group of specialists to label the data into the four classes according to the new grading system defined for this problem. But in the end I realized that a model that classifies negative or COVID19 cases better would also be very helpful here, even if it cannot determine atypical and indeterminate cases according to the grading system. More than that, any CXR taken before 2019 is certainly not a COVID19 case, so any CXR dataset from earlier years would be helpful for the problem.
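The pre-2019 observation gives labels for free. A sketch of the idea, with hypothetical record tuples standing in for a real external dataset's metadata:

```python
from datetime import date

def autolabel_external_cxr(records, cutoff=date(2019, 1, 1)):
    """records: iterable of (image_id, acquisition_date) pairs.

    Scans acquired before the cutoff can be safely auto-labeled
    'negative' for COVID19; later ones are left unlabeled (None)
    for expert grading.
    """
    return [
        (image_id, "negative" if acquired < cutoff else None)
        for image_id, acquired in records
    ]

sample = [("cxr_001", date(2016, 5, 2)), ("cxr_002", date(2020, 7, 9))]
print(autolabel_external_cxr(sample))
# [('cxr_001', 'negative'), ('cxr_002', None)]
```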
On the image level, obtaining more data would be much harder. But at the image level I examined only yolov5 here. There are libraries like mmdetection (which I discovered in the very last days) that contain a large set of object-detection algorithms, from Fast R-CNN to YOLOX. With such a library, it is relatively easy to examine and compare a large number of object-detection algorithms.
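For example, such a comparison could be driven by looping over mmdetection config files and launching its standard `tools/train.py` entry point once per detector. The config paths below are illustrative; the exact files shipped depend on the mmdetection version.

```python
def train_commands(config_paths, work_root="runs"):
    """Build one `tools/train.py` command line per detector config,
    each logging to its own work directory (to be launched, e.g.,
    with subprocess.run from the mmdetection repo root)."""
    commands = []
    for cfg in config_paths:
        # Derive a run name from the config file name, e.g. 'retinanet_r50_fpn_1x_coco'
        name = cfg.rsplit("/", 1)[-1].rsplit(".", 1)[0]
        commands.append(
            ["python", "tools/train.py", cfg, "--work-dir", f"{work_root}/{name}"]
        )
    return commands

configs = [
    "configs/faster_rcnn/faster_rcnn_r50_fpn_1x_coco.py",  # illustrative paths
    "configs/retinanet/retinanet_r50_fpn_1x_coco.py",
]
for cmd in train_commands(configs):
    print(" ".join(cmd))
```

Each run then lands in its own work directory, so the resulting mAP numbers can be compared side by side.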
In this project we developed a tool for COVID19 diagnosis using CXR.
This includes diagnosis from the CXR according to the predefined grading system, alongside locating opacity areas in the lungs.
To do that, we developed two different models: one for the pneumonia diagnosis and the other for opacity detection.
For the diagnosis model, we examined several ResNet architectures with different image resolutions. In addition, we developed a lung detector and used it to train another set of models that are fed the lung-detector annotations alongside the original input. Although all the trained models gave similar results, ensembling them all together produced a significant performance improvement on the test set. We conclude that additional improvement can be achieved by obtaining a new dataset, and we suggest a method for using external data for this problem.
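Probability-level averaging is the simplest way to build such a classifier ensemble; a sketch with made-up numbers (the real models output a softmax over the four study classes):

```python
def ensemble_probs(per_model_probs):
    """Average the class-probability vectors predicted by several models
    for the same study (all vectors cover the same 4 classes)."""
    n_models = len(per_model_probs)
    n_classes = len(per_model_probs[0])
    return [
        sum(p[c] for p in per_model_probs) / n_models
        for c in range(n_classes)
    ]

# Two hypothetical models over (negative, typical, indeterminate, atypical):
probs = [[0.5, 0.25, 0.125, 0.125],
         [0.25, 0.5, 0.125, 0.125]]
print(ensemble_probs(probs))
# [0.375, 0.375, 0.125, 0.125]
```

Averaging washes out each individual model's idiosyncratic errors, which is consistent with the improvement we observed on the test set.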
For the opacity detection, we trained several yolov5 models with mAP of 0.45-0.5 to build the opacity-detection ensemble.
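One common way to merge detections from several trained detectors (a generic sketch, not necessarily the exact fusion used in this project) is to cluster overlapping boxes across models and keep a confidence-weighted average per cluster:

```python
def iou(a, b):
    """Intersection-over-union of two (x0, y0, x1, y1) boxes."""
    ix0, iy0 = max(a[0], b[0]), max(a[1], b[1])
    ix1, iy1 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix1 - ix0) * max(0, iy1 - iy0)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def fuse_boxes(detections, iou_thr=0.55):
    """detections: list of (confidence, box) from all models on one image.

    Greedily clusters boxes by IoU; each cluster's box is the
    confidence-weighted average of its members, and its confidence is
    the members' mean confidence.
    """
    detections = sorted(detections, key=lambda d: -d[0])
    clusters = []
    for conf, box in detections:
        for cluster in clusters:
            if iou(cluster["box"], box) > iou_thr:
                cluster["members"].append((conf, box))
                w = sum(c for c, _ in cluster["members"])
                cluster["box"] = tuple(
                    sum(c * b[i] for c, b in cluster["members"]) / w
                    for i in range(4)
                )
                cluster["conf"] = w / len(cluster["members"])
                break
        else:  # no overlapping cluster found: start a new one
            clusters.append({"conf": conf, "box": box, "members": [(conf, box)]})
    return [(c["conf"], c["box"]) for c in clusters]

detections = [(0.8, (0, 0, 10, 10)),       # model A
              (0.6, (0, 0, 10, 10)),       # model B, same region
              (0.5, (100, 100, 110, 110))]  # model B, a separate finding
print(fuse_boxes(detections))
```

More elaborate schemes (e.g. weighted boxes fusion) refine this idea, but the clustering-and-averaging core is the same.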